Using open source libraries on GitHub to support RISC OS in Wifi, USB etc.
Sveinung Wittington Tengelsen (9758) 237 posts |
With the example of https://github.com/libusb/libusb which is implemented on several platforms, would it be possible to use this so that RISC OS could support USB v.3/4? I notice that there is also an open source WiFi-management library – could this bring full WiFi support to RISC OS, currently non-existent? Then there is the matter of ADFS, which now = Ancient Disk Filing System, unable to support several-terabyte (mostly USB v.3/4) external hard disks – are there open source libraries written in C which could possibly be of use here on 32-bit systems? They could be used for the currently fogware RISC OS 64 too. Seeking less ignorance.. :) |
Steve Pampling (1551) 8170 posts |
You’re doing it wrong. Insisting on that size of storage in a client device is pretty much announcing your participation in the IT equivalent of a willy contest. I’ve got a medical video project due to start soonish that requires terabytes of storage; that won’t be local to the client devices, it will be in storage replicated between data centres. I skimp at home, and it’s all in one room, but it is a RAID array – the old setup had a disk go pop, and I replaced that and let the setup run for a while to rebuild. And bought a new unit. |
Rick Murray (539) 13840 posts |
It’s FileCore FFS. I don’t get people who want a drive that’s hundreds of gigabytes in size. My largest is 16GB, the usual is 8GB, and those sizes are important: it’s a good size to be useful (this Pi 2 has all the stuff I use and still has 12GB free!) but small enough that periodically taking a complete media image is feasible. DiscKnight is amazing and very useful (and recommended). But the best thing is to have actual backups, because DiscKnight won’t help you if the drive carks it. Use ShareFS, spread copies of important data around. Once in a while, take out the SD card and make an image of it. Better yet, clone it and boot the clone (so you know the process actually works).
Exactly. When you get into serious amounts of data: reliability, recovery, and redundancy are the important words. The only people that need terabytes of local storage are pathological movie pirates that click to download everything even if they don’t ever plan on watching it. At least, then, there’s something to fill up all those terabytes.
What are the licences? Start there, because if it’s GPL… |
Paolo Fabio Zaino (28) 1882 posts |
Indeed…
LibUSB comes with the Lesser GPL v2.1, which is suitable for the RISC OS sources, because the LGPL does NOT require releasing everything under the GPL (as the regular GPL licences do). It kinda behaves like the CDDL v1.1, but still has some extra restrictions; here is a more “legal” description of the limitations on the matter of static linking with the LGPL: “For statically linked libraries, a distributor must offer access to not only the library’s source code, but other information or materials necessary to rebuild the program.” This is already in place with the RISC OS sources. However, introducing a licence that is not Apache 2.0 may cause some issues in terms of simplicity of the RISC OS licensing, so it’s probably a question for ROOL more than for general opinions. Certainly libUSB is a library that has been used extensively and, IIRC, FreeBSD uses it too, although they may still use the original version on SourceForge, so take this comment of mine with a pinch of salt – probably best to verify. But, besides licensing and the obvious mistake of confusing ADFS with FileCore, the question is always the same: who is supposed to do the port, testing and module creation, and ensure the library doesn’t depend on the SharedCLibrary? Which (spoiler alert) libUSB does – it depends on glibc:
or
etc… Has ROOL come up with an official strategy for drivers/kernel modules “allowed dependencies” for modules that need to reside in ROM? I have no idea to be honest. |
André Timmermans (100) 655 posts |
Videos, backups of your DVDs, etc. DVDs alone are 4 to 8GB, so you quickly fill a TB HD. I don’t really care for a 64-bit RISC OS in the near future, but I would certainly like to be able to access files larger than 4GB. I have a few Full-HD concerts in MPEG2 TS format I’d like to be able to view with KinoAmp. |
Steve Pampling (1551) 8170 posts |
…somewhere on the network… |
Cameron Cawley (3514) 158 posts |
My understanding is that libusb is not a USB stack on its own – it’s a portability layer that client applications can use to talk to the operating system’s USB stack. It would be a useful library to have on RISC OS in its own right, but it wouldn’t magically improve the existing USB support in RISC OS in any way. USB isn’t the best example to use here though – the RISC OS USB stack is ported from NetBSD, so it’s already based on existing open source code. There’s currently a bounty open for updating it to the latest version, so there’s already a plan – it just needs a claimant for the full task (it’s currently “part claimed”, and all work so far seems to be focused on improving CDC support).
Some of the games supported by ScummVM can get pretty big – especially the multi-CD games or the more modern Wintermute/AGS/SLUDGE games. So there’s definitely a use case for me at least.
I tend to find that network shares aren’t the best option in terms of speed. That could just be me, though. |
Paolo Fabio Zaino (28) 1882 posts |
Correct, it’s a client library: Here is the FreeBSD USB stack: And here is the FreeBSD implementation of the libUSB client: The USB support in the FreeBSD kernel is structured in 3 different layers:

- Lowest layer: contains the host controller driver, which provides a generic interface to the hardware and its scheduling facilities. It supports initialisation of the hardware, and scheduling and handling of transfers.
- Middle layer: deals with device connection and disconnection, basic device initialisation, driver selection and other device functions (look at the linked source for more details).
- Top layer: the individual drivers for specific devices or device classes.

So libUSB kind of sits on top of all 3 layers and allows access to USB info from user space, IIRC. I have just noticed that the original topic message mentions:
So the answer here is NO, libUSB is not the code Sveinung is looking for. |
Paolo Fabio Zaino (28) 1882 posts |
Considering most RISC OS boards use either a very old IDE interface or SD cards, it’s not hard to get fast network data transfer, TBH. Have you tried NFS? Also, to improve network transfer performance, something to try is increasing the MTU size: most switches (unless they are uber cheap) should support a 3000-byte MTU (instead of the standard 1500). Obviously this isn’t for the complete novice user, so support for larger discs is probably preferable. HTH |
Steve Pampling (1551) 8170 posts |
I think a lot of the older network interfaces for RO machines suffered problems with speed/duplex negotiation; more recent kit has issues with bottlenecks in the stack implementation. I’m pretty sure the commonality of stack processing and the GUI task switching does not help. Implementing a stack that used a different core would improve things, but you need a bit of help from the OS at a low level. Where are we with multicore use? |
André Timmermans (100) 655 posts |
OK, you just moved your TB disc from your RISC OS machine to somewhere on the network. Anyway, I tested the network in the past, since my router allows sharing a USB disc on the network. I also tried LanMan98 when it was made available to all. No freezes here, but some errors popping up at random. |
Rick Murray (539) 13840 posts |
When ripping DVDs, I transcode them to H.265. They run to around 800MB-1GB apiece (including commentary tracks as alternate audio). MPEG-2 is quite an old and not terribly efficient codec, which is why DVB-S no longer uses it (except for SD streams, for compatibility).
Wow, it surely is dedication to want to watch videos on a machine with no hardware acceleration to speak of. I’m guessing that’s why you stick with MPEG-2 instead of transcoding to something newer? If it’s being decoded in software, there might not be enough raw grunt to deal with the H-dot-number codecs?
To a filing system that may be more modern and/or better supported (and possibly journalling and such). If it supports files greater than 4GB, it’s not going to be FAT32…
The problem here is RISC OS. Across the developed world, people are streaming video via their ADSL connections. Hell, Netflix basic resolution isn’t great but it works with my lethargic 3Mbit (Prime Video, not so much, but they use a clearly inferior codec as is painfully obvious when watching the visible splotches in the dark parts of night scenes). I have, in the past, done similar using an old Pi and OSMC or Kodi or whatever it’s called. Even the original Pi 1 doesn’t break a sweat at 720p, so DVD rips are absolutely not a problem. Oh, I forgot, I’ve also shared live broadcast recordings from satellite. It’s the entire TS stream for the channel, which in the case of BBC 1 HD or NHK World can run to gigabytes per hour plus multiple audio/subtitle streams (like audio description vs normal audio, and some broadcasts have enhanced audio (I’m guessing 5.1? I don’t have equipment for that)). Seeking can be a bit glitchy at times, but there’s enough speed even in WiFi to play. Well, after all, the original video signal (including the other muxed streams) travelled something like fifty thousand miles to the satellite and back, so I can’t imagine it’s out of the capabilities of WiFi to get a single part of that stream from one side of the living room to the other. ;) It might be, simply, that all this modern stuff is just happening too fast for RISC OS and it gets its knickers in a twist, because…
Just musing out loud here – it could be a simple race condition between events that wouldn’t have happened with 10baseT (or worse, that horrid co-ax stuff) but which can arrive at the same time on faster networking technologies? Just a guess, but it certainly wouldn’t be the first time in the history of computing that “newer hardware working faster causes stuff to fail or otherwise fall over in a soggy heap”. |
Steve Pampling (1551) 8170 posts |
Maybe it all arrives and there’s nowhere for it to move on to? Y’know like it does on machines where the network card (or built in) has its own processor to handle the task of receiving and storing in a local (to the machine) memory block ready to be accessed by the other processor(s) running the decode for delivery to the GUI related bits. Multiple processors / cores have more uses than making the software that’s already faster than a human even faster. |
Colin (478) 2433 posts |
Caused by streaming over a callback while using LanManFS in the foreground. An SMB connection is one socket. A filing system command starts and ‘claims’ the socket while it completes the transfer. Meanwhile, a streaming socket tries to read data and can’t, because of the transfer in progress – hence the error. Multitasking systems can just wait for the resource to become available and stay responsive; RISC OS can’t. If the waiting process was wanting to read a file, a better solution is possibly to return no data instead of an error, but that relies on the caller not waiting in a loop if OS_GBPB returns 0. And not all filing system commands can return without error having done nothing. I think ‘waiting for a resource to become available’ is a problem for a lot of RISC OS processes, and is why porting more up-to-date drivers will not cure RISC OS freezes. Take the RComp network stack, for example: I find it generally faster, but it suffers from the same problems as the old network stack. |
David J. Ruck (33) 1635 posts |
Colin, that is a very good example of one of the many reasons why a bit of cosmetic tinkering, or a bodge here and there, is not sufficient to allow RISC OS to perform the sort of duties which other OSes can achieve with ease. A modern OS needs to be written from the ground up supporting kernel-level threading. |
Colin (478) 2433 posts |
The ironic thing is that I don’t think it would be a problem if callbacks weren’t triggered, but LanManFS has to explicitly trigger callbacks while sitting in a loop transferring data, for the network stack to work. I don’t see how multithreading helps RISC OS. RISC OS runs in a single thread, and it can either freeze waiting for a hardware resource to transfer data, or freeze waiting for some multitasking layer between RISC OS and the hardware to transfer data. |
Sveinung Wittington Tengelsen (9758) 237 posts |
Is this because of the cooperative multitasking model RISC OS uses, or is preemptive multitasking afflicted with the same issue? I guess CM demands more careful restraint on the programmer’s side re. monopolising resources than PM does. If so, it would be wise to consider PM for a future 64-bit RISC OS, which by necessity would have to be both multi-core and multi-CPU – both of which may require PM? |
Stuart Swales (8827) 1357 posts |
“If I were going there, I wouldn’t start from here” |
Rick Murray (539) 13840 posts |
Neither, really. If you notice, Colin is talking about callbacks. The problem is that internally RISC OS is a single-process, single-thread, single-context system. The Wimp is just smoke and mirrors that runs on top (it’s why the Wimp handles application management, not the kernel; the kernel has no concept of such things). I’ll repeat what Druck said:
|
Steve Pampling (1551) 8170 posts |
That sounds like the viewpoint of many people I know with regard to networking. |
Rob Andrews (112) 164 posts |
Has anyone looked at this to give us WiFi & Bluetooth: https://github.com/georgerobotics/cyw43-driver/tree/main/firmware |
Sveinung Wittington Tengelsen (9758) 237 posts |
To me, conserving the look & feel – “The RISC OS Experience” – even in a 64-bit system is of primary importance. The fact that legacy software must run under emulation need not imply slow running – it depends how optimised the emulator can be. That’s the Desktop – everything beneath will be a different beast altogether, with little crucial functionality remaining from the old system. I guess that’s the sort of sacrifice one must make to reach the level of other operating systems’ development over 24 years, without “becoming them”. From this standpoint it’s more a matter of RISC OS’ reincarnation as a modern operating system than its development per se, catching up with over 20 years of development on other platforms. I think this is the only strategy for RISC OS’ long-term survival, unless https://www.foxweather.com/earth-space/solar-storm-wipe-out-internet happens – a new Carrington Event. |
Colin (478) 2433 posts |
To me that just enables the use of the chips on the Raspberry Pi, which gets you to the same stage as plugging in a USB dongle – you’ll need a NetBSD Bluetooth stack and a RISC OS API on top of that. WiFi has a better chance: it would just need a module like one of the EtherXX modules, with no new RISC OS API except a WiFi selection app. Bluetooth, like USB, needs an API to develop the different Bluetooth functions. Let’s hope they don’t use USB as an example API. |
Sveinung Wittington Tengelsen (9758) 237 posts |
..“that is, if we were lucky!” – from Monty Python’s Four Yorkshiremen sketch. (In-flight entertainment, giggle relief.) It should be possible to keep the “Style Guide” in RISC OS 64’s API. If going from CM to PM, and multi-core multi-threading to boot, it’d probably need some tweaking/changes. Writing this new SDK, with 64-bit compilation and so on, will maybe require the amount of work/“money” a 64-bit rewrite of RISC OS will require, or close to. Dunno, me sans bean-counter/bankster genes. Nice if someone with a bit of these genes can do the numbers on both the new SDK and the RISC OS rewrite, incl. a 26/32-bit emulator, so we can see about crowdfunding part of it. |
Gavin Cawley (8462) 70 posts |
“To me, conserving the look & feel – ‘The RISC OS Experience’ … That’s the Desktop – everything beneath will be a different beast altogether with little crucial functionality remaining from the old system.” Bit like RPCEmu then? ;o) … especially now that I can play !Sangband again! |