PipeFS and OS_GBPB 10
Thomas Milius (1471) 64 posts
PipeFS is useful in general, but it is limited and has a silly bug, as I had to discover today. The PRM recommends using OS_GBPB 10 to obtain the actual size of a pipe before reading, to avoid blocking the task. This is really the wrong concept in itself. It would be nice if PipeFS supported OS_Args 9 operations with non-blocking options, like DeviceFS, and a special parameter to allocate buffers larger than 256 bytes. That perhaps sounds a bit silly, since PipeFS in general swaps a task out when there is no data or too much data, but this has fatal consequences in modules or applications. Of course, if you know you are using PipeFS you can program around it using OS_GBPB 10, but in general it would be much nicer to control this transparently to the application via special parameters or OS_Args 9 support.

However, even with OS_GBPB 10 there is a problem. I wrote a module which splits a USB channel into two pipes, and yes, I have to take care not to write more than 255 bytes into one pipe. I am therefore using OS_GBPB 10 on exactly one of the pipes, with R3 = 1 on entry. Everything works fine as long as only one pipe is open at a time. But if the second pipe is opened in parallel, the SWI fails and returns 0 objects in R3. If I instead set R3 = 2 on entry (the maximum can only be 1, since I am using no wildcards in the names), everything is fine and I get my object count of 1 when the pipe already exists.
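For reference, a minimal sketch of the PRM-recommended approach: enumerate the pipe with OS_GBPB 10 and read the length field from the returned directory entry (load address, exec address, length, attributes, object type, then name), looping on the R4 continuation key as needed. The pipe name "mydata" is hypothetical, and error handling is omitted.

 10 REM Sketch: find how many bytes are waiting in a pipe before
 20 REM reading, so a read of that many bytes will not block.
 30 REM "mydata" is a hypothetical pipe name.
 40 DIM buffer% 256
 50 h%=OPENOUT"Pipe:mydata"
 60 restart%=0
 70 REPEAT
 80   SYS"OS_GBPB",10,"pipe:",buffer%,1,restart%,256,"mydata" TO ,,,read%,restart%
 90 UNTIL read%=1 OR restart%=-1
100 IF read%=1 THEN
110   size%=buffer%!8 : REM length field of the directory entry
120   PRINT "Pipe holds ";size%;" byte(s)"
130 ENDIF
140 CLOSE#h%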
Sprow (202) 1158 posts
It is legal for R3 to return 0 (meaning that on this call no matching objects were found); you must call again with the magic key from R4 until you get something. See PRM 2-73. The following program illustrates this:

10 DIM buffer 256
11 t1=OPENOUT"Pipe:test1"
12 t2=OPENOUT"Pipe:test2"
20 restart=0
30 REPEAT
40   SYS"OS_GBPB",10,"pipe:",buffer,1,restart,256,"test2" TO ,,,read,restart
50   PRINT read
60 UNTIL restart=-1

>RUN
0
1
0