core only ROS build?
Matt (481) 28 posts |
Has anyone produced a ‘core-only’ build suitable for smaller SoCs or dev boards with no video? With an expected 20 million connected devices by 2020 in the Internet-Of-Things, surely we have the ideal OS for such devices… Matt |
Wouter Rademaker (458) 197 posts |
It would be single tasking, so of little use. |
Matt (481) 28 posts |
I think ‘of little use’ is a bit strong – effectively you’d be left with a miniature BBC Micro (once set up with a terminal), which has been of much use over the years – especially for monitoring sensors. The sort of applications used in the Internet-Of-Things (sensors, relays, etc.) would be very simple, and would most likely be single tasking – besides, any needed background tasking could easily be taken care of by modules, and would be interrupt-related anyway – i.e. packets arriving, I/O, etc. However, I agree a more dedicated OS would be of more use – I just don’t think it currently exists. Perhaps (a new) ‘Embedded RISC OS’ could be the way forward – we are talking a huge future market, so this should be taken seriously. Matt |
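(As a rough illustration of the module-driven background tasking Matt describes: a minimal sketch, in C, of registering a periodic routine via OS_CallEvery to poll a sensor with no Wimp involved. The one-second interval and read_sensor() are invented for the example, and in a real module the handler must live in the RMA and respect the usual restrictions on ticker-time code.)

```c
/* Sketch only: background polling from a module, no Wimp required.
 * read_sensor() is hypothetical; SWI numbers are the documented ones. */
#include "kernel.h"
#include "swis.h"

#define SWI_OS_CallEvery         0x3E
#define SWI_OS_RemoveTickerEvent 0x3F

static void ticker_handler(void);   /* needs a proper veneer (e.g. CMHG) in a real module */

static _kernel_oserror *start_polling(void *workspace)
{
    /* R0 = delay in centiseconds minus 1, R1 = routine, R2 = value for R12 */
    return _swix(SWI_OS_CallEvery, _INR(0,2), 99, (int)ticker_handler, (int)workspace);
}

static _kernel_oserror *stop_polling(void *workspace)
{
    return _swix(SWI_OS_RemoveTickerEvent, _INR(0,1), (int)ticker_handler, (int)workspace);
}

static void ticker_handler(void)
{
    /* Called roughly once a second, in a restricted context: do the minimum
     * here (read a latch, set a flag) and defer real work to a callback. */
    /* read_sensor(); */
}
```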
nemo (145) 2556 posts |
True and false respectively. I’ve long thought that the task management and WIMP GUI aspects of the Window Manager ought to have been separated, with the task stuff going into TaskManager (and the UI bits of that back into WindowManager). The fact that you initialise tasks with WindowManager even if they don’t have a window but ask TaskManager for the list of current tasks is… odd. Perhaps a new module, “Task”, which Wimp_Initialise, TaskManager_EnumerateTasks etc would proxy to. (Yes, I know TM_ET is not quite the same as Task_Enumerate would be… I’ve exploited that subtlety before). |
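(To make the oddity concrete, a small sketch of the asymmetry: a task registers with the Window Manager via Wimp_Initialise, yet asks the TaskManager module to enumerate tasks. Register usage and the record layout are from memory, so treat this as approximate and check the PRM.)

```c
/* Sketch: the task/UI split described above. Register details from memory. */
#include "kernel.h"
#include "swis.h"

#define SWI_Wimp_Initialise            0x400C0
#define SWI_TaskManager_EnumerateTasks 0x42681

static void list_tasks(void)
{
    int version, handle;
    int buffer[64];   /* filled with 16-byte records: handle, name ptr, slot size, flags */
    int index = 0;

    /* A "task" is created by the Window Manager, even if it never opens a window... */
    _swix(SWI_Wimp_Initialise, _INR(0,3)|_OUTR(0,1),
          310, 0x4B534154 /* "TASK" */, (int)"HeadlessTask", 0,
          &version, &handle);

    /* ...but enumerating the current tasks is the TaskManager module's job. */
    do {
        _swix(SWI_TaskManager_EnumerateTasks, _INR(0,2)|_OUT(0),
              index, (int)buffer, sizeof(buffer), &index);
        /* (records left unparsed in this sketch) */
    } while (index >= 0);
}
```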
Dave Higton (1515) 3534 posts |
Well… my central heating controller runs headless on an A3010. It definitely multi-tasks and indeed requires multi-tasking, because other apps (e.g. Alarm) are required to run. If I ever get around to making it into a home automation gateway, even more apps will need to run, multi-tasking. I’m not sure what benefit there is to be had by removing the Window Manager. I’m certain that apps that normally have a GUI, e.g. Alarm, would need to be runnable anyway. Perhaps the simplest solution is to leave everything there, and configure the screen to a low resolution so as not to consume much RAM. There has to be some way of dealing with Wimp error boxes. Arguably, leave the Window manager there, run at a reasonable resolution, and supply a VNC or RDP server for remote management. |
Rick Murray (539) 13850 posts |
I’ve been saying this for years – usually by picking holes in my ARM-based PVR that overflowed its 16Mb flash just to record and play back video with a fancy-looking UI. RISC OS, being smaller, ought to be much more cost-effective. Limitations, however:
Don’t get me wrong, I would be totally behind an embedded platform running RISC OS; there have to be more than a hundred flavours of Linux; but there are some issues that would need to be considered. As for a headless OS, I believe the system can accept input from serial and write VDU output to serial – can this be altered (vectors?) to speak telnet instead? |
nemo (145) 2556 posts |
Does UnixLib not provide that? (I don’t know)
TaskWindow. Note that the TaskWindow module does not require any particular UI (and would be happy with Telnet to use your example) but does require task switching… hence my comment.
Nor is anything else, by definition. What you mean is it’s much easier to write system level code (and in fact you’re more often forced to) in RISC OS.
I’m pretty sure I always forget to do that. I scowl though, is that OK?
Yes.
Yes. |
Steve Pampling (1551) 8172 posts |
A combination of a couple of utilities: NoError (IIRC) and WimpLog (Theo’s stuff). |
Rick Murray (539) 13850 posts |
I have highlighted the important word. UnixLib is an add-on. Still, we don’t want to be POSIX compliant. It basically describes a generic Unix right down to how the filesystem looks and a bunch of commands that should be implemented, one of which is “awk” (sounds like an animal noise – awk! awk! awk!). What is the OS specification doing telling you that you need a pattern processing language? [http://pubs.opengroup.org/onlinepubs/9699919799/] I notice also that the description of the directory structure [http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap10.html] glosses over the /bin /sbin /usr/bin /usr/sbin mess.
This is what happens when you have a phone with a broken text-entry UI (Sony, I’m looking at YOU – both the Xperia Mini Pro and the current Xperia U are the same) that doesn’t necessarily show you what you are writing but might display some other part of the text… and a tappey-swipey input system that takes guesses at what you wanted to say. That said, maybe the phone is right? Playing with R14 in SVC mode is like attacking it, and yeah, when the thing freezes and you have to reset, scowling is involved. And lots of words you don’t say in front of children or grannies. Doesn’t the current layout of RISC OS imply that TaskWindow goes hand in hand with the Desktop? I’m not aware of any other task switcher option.
Your reply suggests to me that you may be aware of an interrupt-driven telnet interface to replace VDU/kbd. Have I missed something interesting? |
Theo Markettos (89) 919 posts |
There’s code in HAL_DebugTXRX to essentially implement the ‘core only’ mode – boot to a Supervisor prompt using the serial port rather than video, which is used in board bring-up. Any kind of ‘cloud image’ would still need to be targeted at a particular platform.
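(For flavour, a sketch in the spirit of that serial debug path: a polled transmit routine for a 16550-compatible UART of the kind the OMAP parts carry. The base address is a placeholder, and the real HAL entries are board-specific assembler.)

```c
/* Sketch of a polled debug-TX routine for a 16550-style UART.
 * UART_BASE is a placeholder; OMAP UARTs use 32-bit spaced registers. */
#include <stdint.h>

#define UART_BASE  ((volatile uint32_t *)0x49020000)   /* placeholder address */
#define UART_THR   (UART_BASE[0x00 / 4])                /* transmit holding register */
#define UART_LSR   (UART_BASE[0x14 / 4])                /* line status register */
#define LSR_THRE   (1u << 5)                            /* TX holding register empty */

static void debug_tx(char c)
{
    while ((UART_LSR & LSR_THRE) == 0)
        ;                       /* spin until the UART can take another byte */
    UART_THR = (uint32_t)c;
}

static void debug_puts(const char *s)
{
    while (*s) {
        if (*s == '\n')
            debug_tx('\r');     /* terminals generally want CR before LF */
        debug_tx(*s++);
    }
}
```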
There’s a good reason for that. /usr may be on a network share or otherwise unmounted, so you need somewhere to put enough commands to be able to mount it and get at the rest of your files. Think discless machines with a tiny amount of ROM that’s enough to contain /bin and /sbin but not all the packages in the world. And sbin is for things that only root can use – hides them away from normal users who can’t run them. A multi-user networked OS is a different ballgame.
If it’s used by the basic system, you need it. Try running a RISC OS machine without BBC BASIC and see how far you get (“BASIC? Isn’t that a 1960s toy language…”) |
Steve Pampling (1551) 8172 posts |
Some say it’s an abbreviation of AWKward. If you try working with it, you find the view is apt and you are in fact mostly working against it. (Or you are a super geek and it’s all perfectly normal) |
Matt (481) 28 posts |
Agreed, RISC OS is a little way off running Apache, but we’re a lot closer to running a small-footprint webserver than to running a modern browser – I can’t see Silverlight / Flash on ROS in the near future. As for non-desktop multi/background-tasking, I imagine an event manager that handled all the interrupts (safely) and provided an easy-to-use API for setting up handlers etc. would go a long way. I guess a new messaging system would also be needed, as the existing system is provided by the Window Manager. Although, back to my original question – any size estimates for a core-only RISC OS? I imagine a 256k ROM (with space for software) would be feasible. Changing the subject a little, I believe the modern ARMs are too powerful for the IoT – can you imagine how small / low power an ARM2 (with its lightning-fast STM/LDMs and no cache issues) could be if made today? |
Rick Murray (539) 13850 posts |
Aren’t people trying to get rid of this stuff on the desktop? ;-)
I think this area is sewn up with the existing hardware. The thing is, low-power devices don’t need fast processing. My bread maker uses an 8051 clone clocking… well, it is a 4MHz oscillator… The PVR is an ARM (200MHz) and DSP (100MHz) combo and, while the UI is laggy, it can record 640×480 in realtime with AAC audio. Other embedded devices would fall between those extremes. Oh, and it would need to be a 32-bit ARM2 with onboard MMU and some I/O and timers and such. These devices already exist – look at other members of the OMAP and DM320 families. The only thing is, the devices that run a good ARM with basic graphics (if any) aren’t cool for running Linux upon. The Pi is basically a hardcore video chip with an ARM thrown in to tie the knots… Yeah, I think 256K ought to be doable – RISC OS 2 was only 512K all in, wasn’t it? |
Rick Murray (539) 13850 posts |
You may find timing to be… interesting. Having written emulators, you ought to know that a bunch of things running asynchronously is seriously hard to get right when a single thread of a single program is doing all the emulation. I’d stick with an SoC – simpler, and they already exist. ;-) Just out of interest, why did you pick the MIPS 220? |
Colin (478) 2433 posts |
Simpler for whom? Once you have farmed off all of the slow-speed interfaces to another chip and written the interface on that chip to communicate with them, how is that different, for me as the user of that chip, from programming the same devices on a SoC? For you it’s different – you wrote it, you understand it – but I still need the documentation to work out what to do. I have just as much chance of working out how to use a SoC interface as your interface. I have the same problem with software libraries: when I write the library, what it does is obvious; other people’s libraries don’t always work the way I think they will. |
Dave Higton (1515) 3534 posts |
Bit-banging is of very limited usefulness. It doesn’t lend itself to multi-tasking. It ties up the CPU in a task that must not be interrupted. Worst of all is bit-banging input, where you have to tie the CPU up in a permanent listening task. And you can completely forget high speed USB. In reality it’s enormously better to have appropriate I/O subsystems that do their low level stuff autonomously. Those tasks can carry on simultaneously. But we all agree, without adequate documentation we’re stuffed. |
Rick Murray (539) 13850 posts |
Exactly. While it is fun to bit-bang stuff, it doesn’t scale. If you are handling SPI for SD cards, USB, Ethernet, a serial port, and IIC – and you want to run a fast IIC, what do you do? Ignore the other stuff while you concentrate on the IIC? What if something is busy-waiting but you have to keep the other systems alive through this? I think the code to try to cope would become quite hellish in short order. Oh, and let’s not forget the amount of processing power required to bit-bang rapidly, due to the amount of code, the variance in what a processor is doing at any one time, and so on. Quote:
If a ~700MHz machine cannot reliably bit-bang 9600bps serial… This isn’t to say it cannot handle running I/O at those sorts of speeds; I think the main problem here is that the timing is completely out of whack. In the example above (the IIC stuff), you realise that dealing with IIC while there are ongoing serial communications means a very good chance the serial comms will get trashed. It is very time-sensitive, as explained in the quote. Perhaps a way to look at the SoC is as a dozen tiny processors all merrily bit-banging their own protocols, leaving you to respond to tidily arranged (if not always logical) messages. A good example of the usefulness of this is the ADLC that was used for Econet. Each bit of each network packet interrupts the computer; however, once the packet has been determined as not being relevant to us (i.e. not our station number), the ADLC itself can discard the remainder of the packet without bothering us. This is the sort of advantage we can leverage when using prebuilt I/O subsystems – let them do as much as possible so we don’t have to! |
Theo Markettos (89) 919 posts |
In case anyone wants a real ARM2, the Amber core runs at 80MHz on a Virtex 6 FPGA. I imagine it could probably be squeezed a bit further. However, the real equivalent to an ARM2 today in terms of size and complexity would be a Cortex-M0 at about 50MHz (12K gates, <0.01mm^2, 4uW/MHz in a 40nm process). |
Rick Murray (539) 13850 posts |
Certainly – one would imagine that the point of having intelligent controllers is that they are, you know, intelligent. In this day and age, it would be nice to be able to think that talking to an SD card is more or less: Is an SD card present? Yes? Okay, read the descriptor block so we know how big it is and such. Okay, good, now read sectors 0 through 7 so we can work out partitioning and format. Anything more complicated than that, you’d wonder why you couldn’t just bit-bang SPI out of some GPIOs and wrap it all up in a few higher-level routines.
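(In other words, something like this hypothetical driver API – every sd_* function name below is invented for illustration, and real SD initialisation involves rather more card-state negotiation than the comments admit.)

```c
/* Hypothetical sketch of how "talking to an SD card" ought to look.
 * The sd_* functions are invented for illustration. */
#include <stdint.h>
#include <stdbool.h>

bool sd_card_present(void);                 /* card-detect switch, or a probe */
bool sd_read_csd(uint64_t *capacity_bytes); /* descriptor block: size, timing */
bool sd_read_sectors(uint32_t first, uint32_t count, void *buffer);

static bool mount_card(void)
{
    uint64_t capacity;
    uint8_t  header[8 * 512];

    if (!sd_card_present())
        return false;
    if (!sd_read_csd(&capacity))            /* "how big is it?" */
        return false;
    if (!sd_read_sectors(0, 8, header))     /* partition table / boot block */
        return false;

    /* ...work out the partitioning and filesystem format from 'header'... */
    return true;
}
```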
I am a believer in having simple hardware and putting the smarts in software. I did a project many, many years ago to implement a serial port on my A3000 using a MAX232 wired to the parallel port. I was able to, with interrupts disabled, bang out 1200bps moderately reliably, but it took a lot of scouring the ARMinstrs document and some rather peculiar code to get the timing exactly right. I reckon I might have been able to push the code to 2400bps (it was a little erratic at that speed, but mostly worked). More recently, I have only worked with bit-banging IIC, which has the beauty of being a clocked protocol, so you can run it at whatever speed the computer is comfortable with – timing jitter is not a concern so long as the 4.7µs minimum is respected.
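(A minimal sketch of why bit-banged IIC is so forgiving: the master paces the clock itself, so generous delays are fine. The gpio_* and delay_us() helpers are assumed rather than any particular OS API.)

```c
/* Sketch: bit-banged IIC master write; the master paces the clock.
 * gpio_sda(), gpio_scl(), gpio_read_sda() and delay_us() are assumed helpers. */
#include <stdint.h>
#include <stdbool.h>

void gpio_sda(int level);     /* drive (or release, for 1) the SDA line */
void gpio_scl(int level);     /* drive (or release, for 1) the SCL line */
int  gpio_read_sda(void);
void delay_us(unsigned us);   /* anything comfortably over the 4.7us minimum will do */

static void iic_start(void)
{
    gpio_sda(1); gpio_scl(1); delay_us(5);
    gpio_sda(0); delay_us(5);        /* SDA falls while SCL is high = START */
    gpio_scl(0);
}

static bool iic_write_byte(uint8_t byte)
{
    for (int bit = 7; bit >= 0; bit--) {
        gpio_sda((byte >> bit) & 1);
        delay_us(5);
        gpio_scl(1); delay_us(5);    /* clock the bit out at our own pace */
        gpio_scl(0);
    }
    gpio_sda(1);                     /* release SDA for the acknowledge bit */
    delay_us(5);
    gpio_scl(1); delay_us(5);
    bool acked = (gpio_read_sda() == 0);
    gpio_scl(0);
    return acked;
}
```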
This is because few people bother with assembly these days and some compilers are… lacking. So essentially you are trying to optimise for the processor and the processor is trying to optimise for you.
I understand that a 700MHz processor ought to be able to bang 9600bps serial. Certainly, it has been proven that C code with a good library can bang GPIO at around 20MHz (and possibly a lot more if you use assembler and disable interrupts). However, the issue here is not whether or not the processor is capable of doing this task, but rather whether it can do it competently. It cannot – not through an inability to run at such speeds, but through an inability to do so without timing jitter. There is something important to remember about serial comms – it is NOT an ack/nak protocol. It absolutely depends upon accurate timing. It only needs to maintain this for a single frame, as it can sync off the start bit, but for that period it must not jitter. Sadly, if you are running a software serial and an operating system, you cannot guarantee that the timing will be accurate. It will work a lot better for clocked and ack/nak protocols. Bit-banging IIC is effortless. However, a processor can’t reliably handle a time-sensitive protocol unless that is all it does. Dedicated hardware is much better at this task. If you don’t believe me, you can get 16550 clones running up to 230–460kbps; try to imagine the sort of processing power you’d need to do that in code. Will your IRQ latency be sufficient, or would you need to continuously sample? You’d need to sample at 2–4 times the data rate to be sure to catch glitches. You would need to detect bits, set yourself up to accept a new byte, do the bit processing, store the byte, and all the while, every single time, take the same number of cycles so you don’t drift. ARM timings for the Cortex-A8 are on ARM.com. Curiously, the ARM ARMs (thin white and big blue) don’t contain a table of timings anywhere. Hmm! We’ll all agree on one thing though – hardware without proper documentation is about as useful as two tampons tied together would be for acting as a telephone… |
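(To illustrate the jitter problem, roughly what a polled software UART receiver has to do for one 8N1 frame. pin_read() and the delay helpers are assumed; the point is that every delay must land near the middle of its bit, frame after frame, or the byte is garbage.)

```c
/* Sketch: receiving one 8N1 frame by bit-banging. pin_read(),
 * wait_bit_time() and wait_half_bit_time() are assumed helpers; at 9600bps
 * a bit time is ~104us and every sample must stay near the bit centre. */
#include <stdint.h>
#include <stdbool.h>

int  pin_read(void);              /* 1 = line idle/high, 0 = low */
void wait_bit_time(void);         /* ~104us at 9600bps; must not drift */
void wait_half_bit_time(void);

static bool uart_rx_byte(uint8_t *out)
{
    while (pin_read() != 0)
        ;                         /* busy-wait for the falling edge of the start bit */

    wait_half_bit_time();
    if (pin_read() != 0)
        return false;             /* glitch, not a real start bit */

    uint8_t byte = 0;
    for (int bit = 0; bit < 8; bit++) {
        wait_bit_time();          /* advance to the middle of the next data bit */
        if (pin_read())
            byte |= (1u << bit);  /* LSB first */
    }

    wait_bit_time();
    if (pin_read() == 0)
        return false;             /* stop bit missing: framing error */

    *out = byte;
    return true;
}
```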
Rick Murray (539) 13850 posts |
Hmm… Within C, and I presume with Linux active, the maximum they could push the GPIO on the Pi to was 21.9MHz. [source: http://codeandlife.com/2012/07/03/benchmarking-raspberry-pi-gpio-speed/ ] I did laugh at the 3.4kHz using the shell (!), and it was “interesting” that, with all these different languages and libraries, there was no pure assembler. Of course, bit-banging custom VGA can be done with much lesser hardware than a ~700MHz Pi – like, say, a 20MHz microcontroller [ http://www.linusakesson.net/scene/craft/ ]. However, the thing is that the microcontroller has the luxury of being dedicated to just the one task. A CPU has to do it in addition to everything else. This is the essence of my issue with using bit-banging. I have no problems with one processor banging one interface, but do worry about the timing and contention (etc.) of one processor handling multiple interfaces. Hence my comments about serial. It can be done, but it can be done only when there is a processor dedicated entirely to that and nothing else. Which gets back to Colin’s point (of May 17th), that if you have a bunch of processors providing a bunch of interfaces, haven’t you just basically duplicated the core interfaces of the typical SoC?
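(For reference, the loop that benchmark times is essentially the one below. The BCM2835 set/clear register offsets are from the SoC manual; how the GPIO block gets mapped into your address space – mmap of /dev/mem under Linux, or the OS’s own mechanism – is left out.)

```c
/* Sketch: toggling one GPIO pin as fast as a plain C loop allows.
 * 'gpio' must already be a mapping of the BCM2835 GPIO block
 * (physical 0x20200000 on the original Pi); pin setup is omitted. */
#include <stdint.h>

#define GPSET0 (0x1C / 4)   /* write 1 bits here to set outputs   */
#define GPCLR0 (0x28 / 4)   /* write 1 bits here to clear outputs */

static void toggle_forever(volatile uint32_t *gpio, unsigned pin)
{
    const uint32_t mask = 1u << pin;
    for (;;) {
        gpio[GPSET0] = mask;   /* pin high */
        gpio[GPCLR0] = mask;   /* pin low  */
    }
}
```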
I think the trend these days is to make the interfaces complex with the intention that more work can be offloaded from the CPU. Think about it. Let’s just assume you are creating an interface for IIC using a piece of bit-bang code on a lowly PIC. The initial set of resources will be check bus, start, get byte, put byte, stop. Those are all that is necessary for a functional IIC implementation. Now tell me that you wouldn’t be tempted to add “get bytes” and “put bytes” and maybe some other stuff like the ability to act as a master or a slave? Then you can go a step further with “transfer in”, “send”, “receive”, “transfer out”. This will make your device really simple to use – you simply load up to n bytes into the PIC’s RAM and tell it to “send” and it will interrupt when it is done or upon failure. You’ll be talking to your device in a way not unlike calling OS_IICOp instead of start, send, send, send, (repeat x times), stop.
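(Put another way, the temptation is to grow a strictly primitive command set into a batched one. Everything below is a hypothetical firmware interface invented for illustration, not any real chip.)

```c
/* Hypothetical command set for an IIC "offload" PIC, showing the progression
 * described above. None of these names correspond to a real device. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Version 1: the bare primitives – the host does all the sequencing. */
bool    pic_bus_ready(void);
void    pic_start(void);
bool    pic_put_byte(uint8_t byte);     /* returns the acknowledge bit */
uint8_t pic_get_byte(bool ack);
void    pic_stop(void);

/* Version 2: the "while we're at it" additions – whole transfers offloaded,
 * much like handing a descriptor to OS_IICOp rather than bit-twiddling. */
typedef struct {
    uint8_t  address;        /* 7-bit device address */
    bool     read;           /* true = read from the device */
    uint8_t *data;
    size_t   length;
} pic_transfer;

bool pic_send(const pic_transfer *xfer);                           /* interrupts on completion or failure */
bool pic_transfer_list(const pic_transfer *xfers, size_t count);   /* repeated-start chains */
```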
Damn right. As you are writing code for USB, I thought I’d chuck in randomly – the USB Mass Storage support in Android leaves a hell of a lot to be desired. My older phones, my Motorola Defy and my Sony Xperia Mini Pro both had “issues” when transferring large files via USB when the storage was in mass storage mode. The Defy would break the connection leading to an incomplete file. The Mini Pro would either break the connection, or “just reboot”. It was 50-50, and if it rebooted, there was a risk the SD card’s map would be a mess. Android would fix it, but at the risk of trashing a bunch of innocent files along the way. Once I sussed this, I found it a lot simpler and safer to dismount the card (a nice feature of the X-MiniPro was you could remove the back and slide out the uSD, it wasn’t buried under the battery or anything) and perform the copy using a regular card reader. (or maybe their documentation was lacking too?) |
Rick Murray (539) 13850 posts |
Watch the video. It’s real retro, I remember stuff like this on Acorn User cover discs. But, more than that, it is really impressive for being generated by a 20MHz microcontroller banging out a VGA signal and audio and doing the calculations necessary to draw the screen in realtime. <keanu> Whoa. </keanu> |
Colin (478) 2433 posts |
I have no experience of implementing USB and only know what I’ve gleaned from the few documents I’ve read on the net. But it seems to me that USB has the most comprehensive documentation I’ve seen. The EHCI specification (USB 2), for example, defines a register-level interface that must be adhered to by all EHCI host controllers and tells you how to feed them (the devices handle all scheduling/priorities of the lists of data fed). So an EHCI driver will (I’m going to say will because I’m naive) work with any EHCI device. Doesn’t make the programming (or the reading) easy though :-) |
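(For a flavour of that register-level interface, this is roughly the layout the EHCI specification pins down: capability registers at the controller’s base, operational registers CAPLENGTH bytes further on. Names and offsets as I read the spec; a real driver overlays these structures on the memory-mapped base.)

```c
/* Sketch of the EHCI host controller registers as the specification lays
 * them out: capability registers at the base, operational registers
 * CAPLENGTH bytes further on. */
#include <stdint.h>

typedef struct {
    uint8_t  caplength;        /* offset of the operational registers */
    uint8_t  reserved;
    uint16_t hciversion;       /* BCD interface version */
    uint32_t hcsparams;        /* structural parameters, e.g. root port count */
    uint32_t hccparams;        /* capability parameters, e.g. 64-bit addressing */
} ehci_cap_regs;

typedef struct {
    uint32_t usbcmd;           /* run/stop, reset, schedule enables */
    uint32_t usbsts;           /* interrupt and status bits */
    uint32_t usbintr;          /* interrupt enables */
    uint32_t frindex;
    uint32_t ctrldssegment;    /* upper address bits for 64-bit capable controllers */
    uint32_t periodiclistbase; /* physical address of the periodic frame list */
    uint32_t asynclistaddr;    /* physical address of the async queue head list */
    uint32_t reserved[9];
    uint32_t configflag;
    uint32_t portsc[1];        /* one port status/control register per root port */
} ehci_op_regs;

/* The driver "feeds" the controller by building queue heads and transfer
 * descriptors in memory and pointing periodiclistbase/asynclistaddr at them. */
```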
Steffen Huber (91) 1953 posts |
Actually, I found this rather disappointing compared to the newer stuff they do on a CPC (4 MHz Z80 with a lousy 6845 CRT). |
Rick Murray (539) 13850 posts |
Has anybody ever tried the minimal amount of RISC OS possible on a Pi with video and keyboard? (essentially, as mentioned above, a turbocharged Beeb) I tried copying the Pi project files (as BCMmini) and knocking out many modules, but the result didn’t work – boot LED blinks as if image is being loaded, but then…nothing. [as in nothing on the screen; I don’t have a serial level convertor for outputting debug] Before anybody asks – just wondering how small I can make RISC OS while having it still remain capable (so had BCMVideo, SDFS, SDIO, RTC, BASIC, USB…). |
Jeffrey Lee (213) 6048 posts |
This is the minimal set of modules that were used to get the first few OMAP3 builds working over a serial terminal. A bit more minimal than what you’re aiming for, but it highlights the core modules that are needed in order to get the CLI working. For the features you’re after, off the top of my head, I think the requirements are: (not including the modules from the base set above)
Some sub-dependencies:
|