Towards BeagleBoard "CMOS RAM"
Dave Higton (281) 668 posts |
Jeffrey, can you send me a copy of the “problem to DOSFS” image that you created, please? Username is davehigton, e-mail domain is dsl dot pipex dot com. |
Dave Higton (281) 668 posts |
Where can I find:
please? |
Jeffrey Lee (213) 6048 posts |
Sure, I’ll send it to you tonight.
There’s no default image per se; the default values are determined by the code starting at line 877 of Kernel.s.NewReset (although admittedly most of the default values are defined by the table starting at line 1063).
The StrongHelp manuals should be mostly accurate. Or you can always refer to HdrSrc.hdr.CMOS. Also it’s worth pointing out that the CMOS addresses listed in the StrongHelp manuals/kernel source aren’t identical to the actual addresses within CMOS RAM. If you look in Kernel.s.PMF.i2cutils you’ll see a whole mass of scary code that performs address translation on the addresses. Looking at MangleCMOSAddress, it looks like the translation works as follows (bearing in mind that E2ROMSupport is TRUE):

Logical           Physical
&00 ... &BF       &40 ... &FF
&C0 ... &EF       &10 ... &3F
&F0 ... &FF       &00 ... &0F
&100 ... max      &100 ... max

So if you’re going to interface your caching code with the HAL then you’ll be receiving physical addresses, not logical ones. But then again, if you were interfacing with the HAL then you shouldn’t need to know the defaults at all, since the OS should identify that the CMOS checksum is corrupt the first time you boot. |
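Purely as an illustration of the mapping Jeffrey describes above (the real code is the assembler routine MangleCMOSAddress in Kernel.s.PMF.i2cutils; the function name and C form here are mine), the logical-to-physical translation amounts to this:

#include <stdint.h>

/* Sketch only: mirrors the table above, assuming E2ROMSupport is TRUE. */
static uint32_t mangle_cmos_address(uint32_t logical)
{
    if (logical < 0xC0)
        return logical + 0x40;   /* &00-&BF -> &40-&FF */
    if (logical < 0xF0)
        return logical - 0xB0;   /* &C0-&EF -> &10-&3F */
    if (logical < 0x100)
        return logical - 0xF0;   /* &F0-&FF -> &00-&0F */
    return logical;              /* &100 upwards is unchanged */
}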
Dave Higton (281) 668 posts |
Hmm, I was hoping to be able to introduce this code in two stages: (1) read only, so that a CMOS RAM image can be read in from a disc file put there manually by the user; (2) the full read and write job. The point of stage 1 is that it permits people to manually change the CMOS contents (though computing a new checksum might be a chore), but stage 2 is much more difficult than stage 1, and therefore it will arrive significantly later. The filing system operations for all the possible cases of the write operation are far more involved than for read. |
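For anyone tempted to edit an image by hand, fixing up the checksum is only a few lines of code once the algorithm is known. The sketch below is an illustration only: the simple modulo-256 sum and the CHECKSUM_OFFSET location are assumptions, and the real algorithm and offset should be checked against the kernel source before relying on either.

#include <stdint.h>
#include <stddef.h>

#define CHECKSUM_OFFSET 0   /* hypothetical; check the kernel source */

/* Recompute the checksum of a CMOS image, assuming (for illustration) a
   modulo-256 sum of every byte except the checksum byte itself. */
static void fix_checksum(uint8_t *image, size_t size)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < size; i++)
        if (i != CHECKSUM_OFFSET)
            sum += image[i];
    image[CHECKSUM_OFFSET] = sum;
}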
Dave Higton (281) 668 posts |
I’ve only ever seen documentation for the contents of 256 (240) bytes of CMOS RAM, yet I understand that the Iyonix has 1 kiB, and your references above show that sizes up to 8 kiB exist. Are the contents beyond the first 256 bytes documented? Are those bytes used, even? Just curiosity; once my stuff is going, size will no longer matter :-) |
Jeffrey Lee (213) 6048 posts |
It’ll be easy enough to get hold of a safe set of default values for use with stage 1 – just write a dummy NVRAM implementation for the HAL, boot the machine to the supervisor prompt, and use *save to grab a copy of the HAL’s NVRAM cache.
Somewhere in CVS there’s a module which allows (tag,value) pairs to be stored in CMOS RAM, using arbitrary strings as the tags. I think it was used in some of the STBs/other non-desktop machines. It might be worth resurrecting that module as a way of managing the (comparatively) unlimited space that your code would provide. But as far as desktop machines go, I don’t think any of them have used more than the standard 240 bytes of storage. |
Jan Rinze (235) 368 posts |
Dave, I am very curious about your progress with the SD/MMC card interface. Could I have a copy of the current sources and help out with cleaning it up? I am also very willing to beta test for you. Jan Rinze. |
Steve Revill (20) 1361 posts |
Yes, that’ll be castle/RiscOS/Sources/HWSupport/NVRAM which was used a lot in the NC and STB days. It’s a nice way of doing things. Note: there is a functional specification for this module in its Docs directory. |
Dave Higton (281) 668 posts |
Certainly. E-mail me your e-mail address – mine’s at the top of page 3 of this thread. Do you want it when it’s nearly finished, or do you want the ongoing versions that test little aspects of its operation? Btw right now it’s in the middle of a long series of edits, so there won’t be anything new fit to send until at least tomorrow. I could send you what I sent Uwe a couple of days ago if you wish.
Thanks – that will be helpful. |
Dave Higton (281) 668 posts |
Another quick update. I’ve got code working that matches a long or short filename in the root directory on a FAT16 or FAT32 formatted SD or SDHC card. It’s only a small step from there to read the contents of a file. I’ve asked my co-conspirators for some help towards merging it into the HAL. The code to read a file that’s already there will be in the region of 10 kiB when all diagnostic printf statements are removed. There appears to be plenty of unused space in the HAL. I have no idea how much bigger the code will be when it gains the ability to create, write and rewrite a file. |
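For anyone curious what the directory-matching side of this involves, here is a minimal sketch of testing an 8.3 (short) name against one 32-byte FAT directory entry. It assumes the standard FAT on-disc layout; long-filename matching and the traversal of the directory itself are left out, and the function name is made up for illustration.

#include <stdint.h>
#include <string.h>

#define ATTR_LONG_NAME 0x0F   /* attribute combination marking an LFN fragment */

/* Compare a name in "NAME    EXT" form (11 bytes, space padded, as stored
   on disc) against one 32-byte directory entry. */
static int entry_matches(const uint8_t *entry, const char *name11)
{
    if (entry[0] == 0x00) return 0;                      /* end of directory */
    if (entry[0] == 0xE5) return 0;                      /* deleted entry */
    if ((entry[11] & 0x3F) == ATTR_LONG_NAME) return 0;  /* LFN fragment */
    return memcmp(entry, name11, 11) == 0;               /* bytes 0-10 hold the 8.3 name */
}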
Jeffrey Lee (213) 6048 posts |
A word of advice for sticking C code in the HAL – you’ll want the following line in every file, or in some shared header, and preferably before any code:

__global_reg(6) void *sb;

This will tell Norcroft that register 6 (aka v6, aka R9) must be left alone because it points to the HAL workspace (aka that big struct in HAL.OMAP3.hdr.StaticWS). You also won’t be able to use any non-const global/static variables, since the compiler/HAL isn’t set up to allocate storage space for them automatically. Instead you’ll have to add everything to the struct in hdr.StaticWS, using either a bunch of assembler functions to get pointers to the data or some trickery to produce a C struct from the contents of StaticWS so that the C code can get at the data directly. And of course there’s no malloc or free, and no division functions, although you should be able to take the division functions from the SCL if you find you need them. But on the bright side you should be able to use (v)printf and (v)s(n)printf, since the OMAP3 HAL contains a copy of the CLib file from the Iyonix HAL.
Size shouldn’t be much of an issue – there’s no reason why the HAL can’t be made bigger if you run out of space. Have fun :) |
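To make those constraints concrete, a C file destined for the HAL might start out something like this. It is a sketch only: the struct members and accessor name are hypothetical, the real layout is defined by the assembler in HAL.OMAP3.hdr.StaticWS, and in practice the offset of any C-visible data within the workspace needs to be kept in step with that file.

/* Norcroft-specific: reserve v6/R9 as the HAL workspace pointer. */
__global_reg(6) void *sb;

/* A C view of (part of) the workspace; fields are illustrative only. */
typedef struct {
    unsigned int  nvram_size;        /* hypothetical field */
    unsigned char nvram_cache[256];  /* hypothetical field */
} workspace;

/* No writable globals are allowed, so all state is reached via sb. */
static workspace *ws(void)
{
    return (workspace *)sb;
}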
Dave Higton (281) 668 posts |
I’ve been issuing some HAL_NVMemory calls on my Iyonix. The surprise is that the size call returns 2048. |
Dave Higton (281) 668 posts |
In fact it’s even weirder. Here’s the code:
REM HAL calls for NV RAM
ON ERROR PRINT REPORT$ + " at line " + STR$ERL: END
r0 = 0: r1 = 0: r2 = 0: r3 = 0: r4 = 0: r5 = 0: r6 = 0: r7 = 0: r8 = 0: r9 = 0
NVMemoryType% = 23
NVMemorySize% = 24
NVMemoryPageSize% = 25
NVMemoryRead% = 29

REM Read the NVMemory's type
r0 = 0
r9 = NVMemoryType%
SYS "OS_Hardware", r0, r1, r2, r3, r4, r5, r6, r7, r8, r9 TO type%
PRINT "NVMemory's type is " + STR$~type%

REM Read the NVMemory's size
r0 = 0
r9 = NVMemorySize%
SYS "OS_Hardware", r0, r1, r2, r3, r4, r5, r6, r7, r8, r9 TO size%
PRINT "NV memory's size is " + STR$size% + " bytes"

REM Read the NVMemory's page size
r0 = 0
r9 = NVMemoryPageSize%
SYS "OS_Hardware", r0, r1, r2, r3, r4, r5, r6, r7, r8, r9 TO pagesize%
PRINT "NV memory's page size is " + STR$pagesize% + " bytes"

IF size% <> 0 THEN
  REM Read the memory
  DIM buf% size%
  FOR i% = 0 TO size%
    buf%?i% = &33
  NEXT i%
  r0 = 0
  r1 = buf%
  r2 = size%
  r9 = NVMemoryRead%
  PRINT "About to read " + STR$r2 + " bytes"
  SYS "OS_Hardware", r0, r1, r2, r3, r4, r5, r6, r7, r8, r9 TO read%
  PRINT STR$read% + " bytes were read"
  REM Save the bytes as a file
  SYS "OS_File", 10, "CMOS", &FFD,, buf%, buf% + size%
ENDIF

and here’s the result:

NVMemory's type is C02
NV memory's size is 2048 bytes
NV memory's page size is 16 bytes
About to read 2048 bytes
0 bytes were read

What am I doing wrong? |
Jeffrey Lee (213) 6048 posts |
You’re not setting R9 to NVMemoryRead% :) |
Dave Higton (281) 668 posts |
Yeah, got that; I’ve edited the programme and the post, and now I’m getting 0 bytes read… |
Jeffrey Lee (213) 6048 posts |
After a quick look at the Iyonix HAL source, it’s quite obvious what the problem is – the Read/Write calls aren’t implemented, because HAL_NVMemoryType is reporting that the memory is on the IIC bus. You should use OS_IICOp to access it, using the address returned by HAL_NVMemoryIICAddress. I’m somewhat surprised that OS_Hardware didn’t return an error – the OS is capable of detecting which calls are and aren’t implemented, so really it should have told you about it. |
Jeffrey Lee (213) 6048 posts |
And looking at a comment in the HAL a bit closer, I think you’ll find that because the kernel only works with EEPROMs with 8 bit addresses, the 2K EEPROM used in the Iyonix is spread over 8 IIC addresses (&A0-&AE, bearing in mind that the bottom bit of the IIC address is used as the read/write flag). I.e. bytes 0-255 are at &A0, 256-511 at &A2, etc. |
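In other words, turning a byte offset into something you can put on the bus is just a bit of arithmetic. A sketch of that, assuming the 2K/8-bit-sub-address arrangement Jeffrey describes (the function name is made up):

#include <stdint.h>

/* Each 256-byte page of the 2K EEPROM appears at its own IIC address
   (&A0, &A2, ... &AE); the bottom bit of the IIC address is the
   read/write flag. */
static void split_nvram_offset(uint32_t offset,
                               uint8_t *iic_addr, uint8_t *sub_addr)
{
    *iic_addr = (uint8_t)(0xA0 + ((offset >> 8) << 1)); /* which 256-byte page */
    *sub_addr = (uint8_t)(offset & 0xFF);               /* offset within the page */
}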
Dave Higton (281) 668 posts |
I posted just before I shut down and went to bed. Just as I was walking out of the computer room, I realised what it was. But thanks anyway.
All the calls have to be implemented, AFAIUI. The call did return an error, by returning 0 bytes successfully read. |
W P Blatchley (147) 247 posts |
Reading through this thread, I was wondering if the NVRAM API is going to be complete enough to eventually build a full filing system on top of, so that RISC OS can in the future access the SD card as a regular disc? I know that’s not where you’re aiming right now, Dave, but I was just thinking ahead. It occurs to me that NVRAM access is expected to be slow, and that might cause problems when using this API for disc-like storage? Just thinking out loud… |
Jeffrey Lee (213) 6048 posts |
For some of the HAL APIs, yes, but not for the NVRAM API. The HAL_NVMemoryType page describes which calls you can expect to be implemented for each memory type. And it only returned R0 = 0 because R0 was 0 on entry :) Looking at things a bit closer, I think the reason OS_Hardware didn’t return an error is because there’s no easy way to differentiate between “unimplemented” HAL calls (which simply point to a MOV pc,lr instruction) and HAL calls which are implemented but happen to do nothing (which, as you’d expect, may also simply point to a MOV pc,lr instruction). This might be something worth changing in the future if it causes more problems. |
Dave Higton (281) 668 posts |
I can see that it would eventually be useful to give file access to an SD/SDHC card. It would be possible to re-use the NVMemoryRead and NVMemoryWrite functions to use a named file, by passing in a uint of FFFFFFFF (read a file, write a file) or FFFFFFFE (find if a file exists) as the first parameter. Those values are well out of range of what could reasonably be used for NV memory addresses. The file name (a full path name) would have to be passed in as a fourth parameter. It would be neater to use extra function calls. They wouldn’t be numbered contiguously with the existing functions, but that doesn’t matter. |
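To make the proposal concrete, the shape Dave describes might look something like this. It is purely a sketch of the idea; the names and types are hypothetical, not an agreed API.

/* Hypothetical extension: magic 'address' values redirect the call to a
   named file instead of NV memory. */
#define NVOP_FILE_ACCESS  0xFFFFFFFFu  /* read or write a named file */
#define NVOP_FILE_EXISTS  0xFFFFFFFEu  /* check whether a file exists */

unsigned int NVMemoryReadEx(unsigned int addr,    /* NV offset, or a magic value */
                            void *buffer,
                            unsigned int length,
                            const char *pathname  /* only used for magic values */);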
Jeffrey Lee (213) 6048 posts |
I don’t think the idea is to reuse the NVRAM API for building a filesystem (or at least I hope that’s not what Dave is planning!). Instead I’d expect the SD/MMC filesystem to be implemented properly, i.e. with a module which uses DMA, interrupts, etc. But then you’d have the issue of conflicting accesses between the filesystem driver and the NVRAM code (especially if RISC OS is allowed to mount the partition containing the cmos.dat file). This is a slightly tricky issue to solve, but I think the only real way to do it would be to give the HAL NVRAM code a back door into the proper SD/MMC driver. The SD/MMC driver would then take over the job of caching the NVRAM contents, and any write operations (which could, at least in theory, occur while FileCore is busy) would be cached. A callback would then be used to trigger a sequence of standard file operations to write the updated data to the card. Some extra code may also be needed to make sure any pending NVRAM writes get flushed out before a reset. Preferably this would be done within HAL_Reset itself, since at that point it’s guaranteed that you won’t be receiving any more NVRAM write requests from RISC OS. |
W P Blatchley (147) 247 posts |
With the SD/MMC driver in the HAL, and the SD/MMC filesystem in RISC OS, presumably?
This would be the HAL calling back into RISC OS, wouldn’t it, or have I misunderstood? There’s no provision for this at the moment AFAIUI. Any ideas about how it would work? |
Jeffrey Lee (213) 6048 posts |
Perhaps. There are no hard and fast rules about how much of a driver should be in the HAL and how much should be in RISC OS. But in this case it would probably be easier to have the entire thing in RISC OS, except for a simple HAL device that reports the controller address, IRQ, DMA request ID, etc. Basically the same way as the NIC & video devices work. After all, there’s not much point worrying about inventing a generic API for interacting with SD/MMC controllers if there’s currently only one machine which uses an SD/MMC controller. And putting complex code into the HAL can be a pain anyway, due to the low-level nature of the HAL (very limited interaction with the host OS, can’t use interrupt driven I/O, etc.)
It would be quite simple, really – the SD/MMC HAL device would just have a function which RISC OS (i.e. the SD/MMC module) would call to set the NVRAM write function. Then inside HAL_NVMemoryWrite, if the write function has been set then it simply calls that function instead of using the internal code. Since all HAL components store their data in the HAL workspace, it’s easy enough for variables to be shared between components (e.g. that’s how the Audio DMA packet size hack works). And, for reference, there are a small handful of functions available to allow the HAL to interact with RISC OS. But those are obviously designed for very specific purposes rather than anything generic/extensible like a SWI interface. https://www.riscosopen.org/wiki/documentation/pages/RISC+OS+entry+points+from+HAL |
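A sketch of that hand-off, with invented names (the real HAL entry, device API and workspace layout would differ, and the hook would live in the HAL workspace rather than a static variable, which is used here only to keep the sketch short):

typedef int (*nvram_write_fn)(unsigned int addr, const void *data,
                              unsigned int length);

static nvram_write_fn nvram_write_hook;   /* in reality: a workspace field */

/* Stub standing in for the HAL's own write code in this sketch. */
static int internal_nvram_write(unsigned int addr, const void *data,
                                unsigned int length)
{
    (void)addr; (void)data; (void)length;
    return 0;
}

/* Called (via a hypothetical HAL device entry) by the SD/MMC module in
   RISC OS to install its handler. */
void set_nvram_write_hook(nvram_write_fn fn)
{
    nvram_write_hook = fn;
}

/* The shape of the check inside the HAL's NVMemoryWrite implementation. */
int hal_nvmemory_write(unsigned int addr, const void *data, unsigned int length)
{
    if (nvram_write_hook)
        return nvram_write_hook(addr, data, length);   /* back door into the driver */
    return internal_nvram_write(addr, data, length);   /* normal internal path */
}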
Sprow (202) 1158 posts |
The FAT format allows sectors to be set aside for system use (the simplest example being a boot sector, but others can be reserved by setting the appropriate counts in the BIOS parameter block). That would allow the CMOS to be hidden away from prying eyes (and stop it being deleted!), and allow a filing system to coexist, and be preserved when plugged into Windows PCs, and avoid the need for the HAL to have to scan directory listings – it would reduce to just reading the BPB and fetching (or writing) that one sector. |
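For reference, finding those reserved sectors just means decoding a couple of little-endian fields from the BIOS parameter block in sector 0. A minimal sketch (the field offsets are the standard FAT ones; the struct and function names are invented):

#include <stdint.h>

/* BPB offsets: 11 = bytes per sector (16-bit LE),
                14 = reserved sector count (16-bit LE). */
typedef struct {
    uint16_t bytes_per_sector;
    uint16_t reserved_sectors;
} bpb_info;

static uint16_t read_le16(const uint8_t *p)
{
    return (uint16_t)(p[0] | (p[1] << 8));
}

/* boot_sector points at a copy of sector 0 of the card. */
static bpb_info read_bpb(const uint8_t *boot_sector)
{
    bpb_info info;
    info.bytes_per_sector = read_le16(boot_sector + 11);
    info.reserved_sectors = read_le16(boot_sector + 14);
    return info;
}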