(Bush IBX port) Debugging VIDC VIRQ POST test
Phil Pemberton (7989) 71 posts |
Hi all, I’ve been porting the IOMD branch to the Bush Internet TV set-top box as a fun side project. As of this evening I’ve got enough of the IOMD HAL working that it’ll boot. Sadly, it fails the VIDC VIRQ self test. The key difference to note between the IBX and a RISC PC or A7000 is that the IBX doesn’t have a video VCO; instead it uses the 32MHz I/O clock (RCLK) for VIDC. This might be related to the test failing…
Looking at the code, it seems I’m getting a Vsync rate of 0xCFAE 2MHz ticks = ~37.62 Hz, well short of the ~60 Hz (33,333 ticks ± 2,000) it’s expecting, so it’s no surprise that the test fails. The POST test’s measurement is definitely correct – I checked the frame rate of the video output (via the Csync output) with an oscilloscope and got the same number. What I’ve not been able to figure out is why I’m getting this specific count (0xCFAE, or 53,166 ticks) versus the ‘ideal’ of 33,333. It’s about 1.6x slower, which doesn’t match the ratio of any clock in a RISC PC or A7000 that I know of versus the Bush box (which has 48MHz and 64MHz crystals). There’s provision to fit an HCLK (video clock) crystal at X3 (R32 must also be fitted) and an SCLK (sound clock) crystal at X2 (also fit R20), but neither is fitted in my box. Can you say cost-optimised? :) I had a dig in the old NC branch (https://gitlab.riscosopen.org/RiscOS/Sources/Kernel/-/blob/d068e40f3492db08248eb47be6c33df69c53b640/TestSrc/Begin) and noticed that the VIRQ test doesn’t run on ARM7500FE systems:
For now I’ve pulled this into my branch but checking the “STB” flag instead of “MorrisSupport” — but I was wondering if anyone knew whether this is the right way to go? I feel like a more correct fix would be to work out why the VIDC is initialised wrongly, and fix VIDCTAB so that it produces the correct video signal (PAL or NTSC?), then fix the Virq test. Anyway, questions aside — if anyone’s interested in trying out this absolute insanity…
Repos:
Be warned, you’ll ideally need to use 27C322 ROMs – primarily because this is an 8MB ROM. If you want to use 27C160s in a Bush set-top, you’ll have basically no application space, but also you need to stack two 42-pin IC sockets for each ROM. Pop the /BYTE pin (I think it’s pin 22?) out of the bottom socket. Clip the leg back on the top socket and wire /BYTE to VCC. This pin is wired to A22 on the motherboard… as soon as RO tries to boot, it’ll switch the ROM into byte mode and start getting garbage from it. You’ll also need to disable the SerialDeviceDriver as that crashes the box on boot. The CMOS apparently resets on every boot and ADFS hangs for quite a few seconds. But that’s all on my buglist and something I can fix later. |
Sprow (202) 1158 posts |
I don’t have an IBX box so can’t immediately assist with your VIDC clock question, but I’ll chip in with a few minor comments:

BuildSys, Env, ADFS, Kernel, FPASC – all look like the usual steps involved in adding a Machine type, no excitement there. I’d probably question what ADFS is doing in the ROM given the IBX doesn’t have a floppy drive or hard disc, unless you’re planning some hardware butchery to add them.

HdrSrc – we shouldn’t be advocating reintroducing assembly-time hardware selections; such switches shout “that’s not hardware abstraction” to me. They belong in the HAL component rather than a global header file.

VIDC20Video – likewise, assembly-time hardware selections don’t belong here. The VCO option was hardwired simply because of the origin of this module’s source code, as a way to kick the can down the road since it was known that STB support wasn’t needed. Such things belong in the HAL too, though that could be as simple as signalling the option through the HAL device; the key point is that there’s only one VIDC20Video module, rather than lots of different ones depending on assembly/compile-time decisions.

HAL_IOMD – it’d be nice if there were some way to distinguish an A7000+ from a Bush IBX. Does the SMSC ’669 have a different chip ID to the ’665, for example? Or a peek of the podule bus to spot the modem?

For the VIDC POST failure, yes, the sensible thing to do is work out why the 60Hz VGA settings didn’t result in a 60Hz flyback interrupt, rather than ignoring the failure. |
Phil Pemberton (7989) 71 posts |
Thanks for the comments. ADFS (and the Econet modules etc.) are in the ROM despite the lack of appropriate hardware because I haven’t fettled the ROM build files yet. I need to include the Zip drive drivers, but I want to get a star prompt (over serial) first, then bring up the other hardware. Having the option of including ADFS would be good, but as you say it requires hardware butchery. IDE is possibly the only mod with any real point (using some variant of Vectorlight’s A3010 IDE mod), but it’s still a mod. The TTL serial port is at least a fairly easy mod, with only three wires needed!

Re the hardware-specific mods: the ’669 is one way of detecting the Bush box. The HdrSrc changes only affect the HAL anyway, so they could feasibly be moved into the IOMD HAL source – that’s going to change by definition with any machine type anyway. Distinguishing the hardware is possible – the SMC ’669 does have a different chip ID, and you can see that in the IOMD HAL source. My updated init code checks for a ’669 and uses the different init process the ’669 needs (its I/O addresses are programmable and there are no sane defaults). I’m just not sure of the best way to pass the “machine is a Bush IBX set-top” flag back ‘up the chain’ to other parts of the OS, then check it in e.g. VIDC20Driver to set things like the VCO mode – and, later down the line, the default video mode.

At the moment my build is set up for HAL debug and DebugTerminal, and gets as far as the SerialDeviceDriver loading before crashing unceremoniously. (I’ve just read that out loud and realised that it might not be crashing: SerialDeviceDriver might be reconfiguring the serial port and killing the DebugTerminal.) Video output seems to be separate-sync VGA, but I need PAL composite sync. All stuff I need to look into… |
Sprow (202) 1158 posts |
Fair enough, I thought your module list was based on what
As you say, the HAL could select at assembly time, but my bet is it’s 98% the same as an A7000+, so it could do some detection and put a value into […]. For video specifically, you can force the default mode with GraphicsV 16, so then all that’s needed is a way to signal from the HAL that it’s not a Risc PC/A7000/A7000+. The Video device API is a bit of an untamed beast, because nobody’s ever put in the hours to design a proper API which covers the plethora of different video controllers, so in practice once you’ve found the one you know about […]
Yup. Take a look at the debugging guide, which recommends knocking out the ROM serial drivers if you’re trying to use the same UART for HAL debug. Some boards have loads of spare UARTs, so they can set aside one for HAL debug and present the rest for OS_SerialOp, but not in your case. |
Phil Pemberton (7989) 71 posts |
D’oh – that makes sense. Sadly the module it recommends knocking out (DualSerial) isn’t in the Components file for IOMD builds, so I’m not 100% sure what I need to disable. Regarding the video driver, I don’t think there’s really a need to have a completely separate video device driver. There’s a lot of commonality between “VIDC20”, “VIDC20 in an ARM7500 or FE”, and “VIDC20, in a 7500FE, in a Bush set-top”. It’s really just whether we use RCLK or VCLK instead of a VCO, and whether the driver should default to generating PAL and/or NTSC instead of VGA. My current to-do list looks like this:
I’ve also noticed that this is 90% of the way to something that’d run on the NC family – the main missing parts would be whatever IR keyboard support is needed there, and support for the Chrontel composite video chip. Oh, and a network or modem driver. I doubt anyone’s as mad as I am though, so I don’t expect it’ll happen. |
Colin (478) 2433 posts |
Is the Serial module in there? If so, remove that. There are two Serial modules, ‘Serial’ and ‘DualSerial’. |
Phil Pemberton (7989) 71 posts |
The only serial modules in the Components file are:
For my next build I’ve disabled all three… |
Sprow (202) 1158 posts |
As Colin mentions it’s Serial, as in the component whose sources are in RiscOS/Sources/HWSupport/Serial, but don’t forget the ModuleDB gives things friendly names, which in this case is SerialDeviceDriver. SerialDeviceSupport (which points at RiscOS/Sources/HWSupport/SerialSpt) just provides a bit of BBC Micro-esque emulation; you could leave that in since it doesn’t poke registers itself, and SerialMouse goes via DeviceFS/SerialOp so also doesn’t poke registers. But I’ll admit I usually just comment out the whole group that starts with the word “Serial” too. DualSerial’s a bit of a misnomer too. It’s really PolySerial, as it definitely supports more than two ports.
The definition of the VDU HAL device is in the Kernel, as it happens. You’d probably need to conjure up a define in a VIDC20-specific namespace to make it clear. As I said earlier, the VDU HAL device isn’t very well thought out.
Yes, it’s mildly inconvenient that the Philips RTC has a base address of &A0, as does most of the world’s EEPROM. The HAL RTC just needs to return “no”, but the HAL NVRAM will need to declare whatever EEPROM there is (or at least the first 256 bytes of it). If that can’t be done with IIC probing then, since it’s the HAL and you’ve already figured out you’re an IBX200, it could just switch on IOSystemType there. |
Phil Pemberton (7989) 71 posts |
It looks like removing the Serial modules was the answer, as the Bush box just booted to its first 32-bit star prompt:
The data abort in PortManager is a weird one. The address is obviously in ROM ([…]). I’ve done a bit of debugging on the box, and landed on this:
After some more calculating of offsets and dumping object files with Decaof (there has to be an easier way…), I found that this assembler code maps to this line of code: https://gitlab.riscosopen.org/RiscOS/Sources/HWSupport/PortMan/-/blob/master/c/module#L340
I can’t figure out why the write to […]
Ah! I didn’t realise that. As in, I noticed the source code wasn’t where I expected it, but didn’t realise ModuleDB was how that was handled. I’ll have to take a closer look at the SerialDeviceDriver at some point anyway, as the RCMM keyboard is tied to the second serial port and will need to talk to it. |
Sprow (202) 1158 posts |
Oh yes it can! IOMD’s physical address hasn’t changed, but its logical address certainly isn’t at &3200000 any more – that’s now somewhere near the bottom of RISC OS 5’s 512MB application slot (and probably not mapped in if you’ve only got an 8MB SIMM). PortMan looks like it’s in a bit of a sorry state, having had some other STB support grafted onto it in 2004, but that also just merrily peeks magic memory locations. I suppose it predated the HAL GPIO device so didn’t have much choice, but those chickens have come home to roost now you want to awaken the 7500FE code path after 21 years of peaceful sleep. I’m not sure what PortMan brings to the party, so I’d suggest a three-step plan:
|
Phil Pemberton (7989) 71 posts |
I’ve actually got it fixed – I may have pushed the patched code to GitLab not long before you posted. PortMan is a GPIO manager: it provides SWIs which applications and modules can use to turn hardware on and off. As you say, someone’s grafted on support for a Conexant set-top box processor which had PCI. The 7500FE obviously doesn’t, so the wag who added Conexant support clearly decided to use that as the differentiator… in this post-HAL world, it probably should have asked the HAL what hardware it was running on. I’ve added some code to the ARM7500FE path to make it grab the IOMD base address using […]. That revealed another bug in PortManager – a “Buffer overflow” error at the end of initialisation. It turns out that PortManager loads the GPIO definitions from a file called […]. I’ve noticed that when the Desktop starts, I lose the serial port output. That’s a bit annoying – so to try and stop it, I commented out the Desktop module in the Components file. Imagine my surprise when the Desktop still tried to start! I’m not sure what was going on there, but given the NVRAM still wasn’t working (as I’d mistyped LDRB as LDR), it was easy enough to change the […] |
Phil Pemberton (7989) 71 posts |
This one’s got me puzzled. Here’s a block of code I’ve added to […]:
When I build this, I get a “Warning: UAL syntax in pre-UAL A32 code” from Objasm on the “[…]” line. It’s only a warning and not an error, but is this anything to worry about?
Stuart Swales (8827) 1357 posts |
Just use ye olde LDREQB syntax. |
Phil Pemberton (7989) 71 posts |
That’ll teach me to use the ARM website as a reference guide for instruction syntax! I completely forgot about the old syntax that had the condition code before the byte qualifier… |
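For anyone else who trips over this, the two syntaxes differ only in the ordering of the qualifiers (the registers below are purely illustrative, not the actual NVRAM code):

```armasm
; Pre-UAL Objasm syntax: condition code first, then the size qualifier
        LDREQB  r0, [r1]        ; load byte if EQ - assembles without warning
; UAL syntax (as on ARM's current documentation): size first, then condition
        LDRBEQ  r0, [r1]        ; same instruction - triggers the warning
```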
Phil Pemberton (7989) 71 posts |
(This is quite long again… purposely, as I’m trying to get all this documented for anyone mad enough to try a set-top port in future… there’s a summary at the end.) Righto, starting on the video driver now (and this is likely to lead to the […]). The current state of play is: video is being generated, but with separate syncs, and likely in whatever default mode was detected. MonitorType is Auto, so the kernel and HAL are probably trying to sense the monitor type from the ID pins, which isn’t going to work very well because a TV on a SCART port doesn’t have them. Monitor type translation is left down to the kernel. This is done by […]. This is the standard list:
The parameters are (from left to right):
On the NC branch, if
Or if there is a PAL/NTSC bit:
The obvious way to fix this is to claim […]. Digging deeper, the code for actually reading the monitor type bits lives in the […]. As things are, I’m not even sure that the ARM7500/FE systems (A7000/A7000+/Odyssey) have more than ID0 connected to the 7500 chip’s “CLINES” 8-bit I/O port. I don’t have a motherboard to check, and I don’t think there’s a schematic available outside of Acorn, circa 1997, when the workstation division was shuttered. Wrapping all this up:
I also noticed that
So this might need some changes too. If nothing else, it might explain why the Virq test is failing: the clock config register setting assumes a 24MHz oscillator plus VCO, while the 7500FE uses a 32MHz RCLK.
Phil Pemberton (7989) 71 posts |
Just having a quick look at the VIDC20 Virq test, I think I can see why it’s failing. First I need to explain the “VCO start fix”. This doubles the frequency of the VCO, then applies a divide-by-two prescaler in the VIDC, leaving the output pixel clock unchanged. For some reason this makes the VCO start more reliably (see the comments in […]). Moving up the code to the register settings:
Plugging these numbers into a calculator, we can figure out the interrupt rate:
Or with the ARM7500/FE’s 50.286 MHz VCO clock:
The Virq test measures against a 2MHz clock so for these we should see results of:
The Virq test will accept anything from 31,333 to 35,333 ticks (33,333 ± 2,000). When it fails on the Bush box, it reports a count of 0xCFAE (53,166 ticks). The problem arises when we switch to the RCLK clock:
On the ARM7500FE systems (A7000+, Bush IBX), RCLK is 32MHz, divided down from a 64MHz crystal. The VIDC RCLK input is therefore the 32MHz I/O clock, and we have the divide-by-two prescaler enabled too.
But what if we turn off the prescaler?
That’s a little closer to 33,333 but it’s still out of range. I don’t think there’s a way to generate the 640×480 1bpp mode from the clock frequencies we have available, without editing the mode definition. Meanwhile, I’ve spent half the evening trying to figure out why I couldn’t access the module workspace (specifically the device-specific flags in the descriptor) from […]