22050 Hz audio on Pandaboard
Dave Higton (1515) 3526 posts |
I take it that DRenderer is the bit that would need updating to support USB audio devices? Whatever bit it is, something needs to provide the ability to choose between multiple audio devices. A USB audio device is an addition to whatever built-in audio device a computer has. |
Jeffrey Lee (213) 6048 posts |
Exactly what needs updating depends on who you talk to. My opinion is that it should be SoundDMA. SoundDMA will maintain a list of all the audio devices it detects (i.e. all the audio HAL devices in the system), and will have a user-friendly SWI to list them and to select which one is used. The sound setup plugin in configure can then allow the user to select which device to use, and once selected all sound output from the system will be sent to it.
In the future, once we want to support multiple devices at once (whether for input or output) things might need to change a bit. E.g. we might want multiple instances of SoundDMA (one per device), or maybe have the one instance of SoundDMA service all the active devices. So it might be worth thinking ahead a bit before deciding exactly how this first-pass version will work.
In this future world, I’d expect all 16bit sound to pass through SharedSound, and for SharedSound to interface with the multiple SoundDMA backends and decide which backend a given app should use (under control of either the app or the user). Legacy 8bit sound (and associated things like the sound scheduler) will be tied to a single output only. |
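A minimal sketch of what such a list-and-select SWI pair might look like from C, purely for illustration: the SWI names, chunk numbers and register assignments below are hypothetical and do not exist in SoundDMA at present; _swix and swis.h are the usual shared C library veneer.

  /* Sketch only: the SWI names, numbers and register usage below are all
     hypothetical - nothing like this exists in SoundDMA at present. */
  #include <stdio.h>
  #include "kernel.h"
  #include "swis.h"

  #define SoundDMA_EnumerateDevices 0x59700   /* hypothetical SWI number */
  #define SoundDMA_SelectDevice     0x59701   /* hypothetical SWI number */

  int main(void)
  {
    int context = 0, handle = 0, chosen = -1;
    const char *name;

    /* Assumed interface: R0 in = enumeration context (0 to start),
       R0 out = next context or -1 when finished, R1 out = device handle,
       R2 out = pointer to a human-readable device name. */
    while (context != -1)
    {
      if (_swix(SoundDMA_EnumerateDevices, _IN(0)|_OUTR(0,2),
                context, &context, &handle, &name) != NULL)
        break;
      printf("Device %d: %s\n", handle, name);
      chosen = handle;                /* remember the last device listed */
    }

    /* Assumed interface: R0 = handle of the device to send all output to. */
    if (chosen != -1)
      _swix(SoundDMA_SelectDevice, _IN(0), chosen);

    return 0;
  }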
jim lesurf (2082) 1438 posts |
Going for SoundDMA as the level where the ‘choice of output/input device’ is controlled makes sense to me. It should make it much easier for the user to choose a USB device and bypass any annoying limitations of the ‘mainboard’ sound hardware. However, bear in mind that it would be desirable to support 24bit samples as well as 16, particularly given that 24 and 32 bit transfers are now common. Jim |
Dave Higton (1515) 3526 posts |
Yesterday I looked at the HAL interface for enumerating devices. A USB device would not fit the descriptors at all well – arguably not at all. I haven’t looked at what SoundDMA wants, but if it’s at all similar, we’re going to have to scratch our heads to decide what to do.
The HAL assumes that a sound device has just one word width. It’s only interested in 16 bits. 8 bit devices belong elsewhere. I have two devices that offer both 8 and 16 bit word lengths – and one of them offers 20 bit words too. Of course these are not available simultaneously; the devices are programmable. Even for devices that only provide longer word lengths, it’s usual to offer more than one. And if a device only does wasteful transfers, e.g. a 32 bit word with only 24 useful bits, which length should the system be interested in? The resolution or the transfer size?
What is going to provide the buffer fill and empty routines? Is SoundDMA going to expect them to be fixed (when the device is plugged in, perhaps) and unvarying? The HAL descriptor expects routine addresses.
Sorry if I’ve asked some daft questions – clearly I haven’t done my research before posting. |
Jeffrey Lee (213) 6048 posts |
Yes, SoundDMA and the HAL audio device API are in need of some changes. So what we need is for someone who’s familiar with USB audio (or audio driver implementation on other OS’s) to come up with a spec that will cover everything that’s required.
The way things work at the moment is that SoundDMA is in charge of the buffer memory (actually, the kernel allocates it on boot and SoundDMA just uses a hardcoded address). A simple double buffering method is used. The audio hardware is expected to read data out of the buffers by one of two methods:
Again, this is another area where improvements are required/desired. E.g. SoundDMA (or the audio device) should allocate the audio buffers, not the kernel. We may also want to add support for there being arbitrary numbers of buffers rather than just two – e.g. the audio device decides how many buffers it needs and then just sends “fill this buffer please” requests to SoundDMA at the appropriate times. And yes, SoundDMA does expect all the code/data pointers in the descriptor to be fixed, set in stone at the time the HAL device is created. Are there situations where they’d need to change? |
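To make the “arbitrary number of buffers” idea concrete, here is a minimal sketch in C of what a more flexible device-to-SoundDMA buffer interface could look like. Every name in it (audio_buffer, audio_device_v2, fill_buffer) is a hypothetical illustration, not part of any existing HAL or SoundDMA API.

  /* Hypothetical sketch of a revised audio device interface in which the
     device owns its buffers and asks SoundDMA to fill them on demand.
     None of these names exist in the current HAL or SoundDMA APIs. */
  #include <stddef.h>

  typedef struct audio_buffer {
    void   *data;               /* buffer allocated by the device or SoundDMA */
    size_t  bytes;              /* size of this buffer in bytes               */
  } audio_buffer;

  typedef struct audio_device_v2 {
    /* The device decides how many buffers it wants and how big they are. */
    int           num_buffers;
    audio_buffer *buffers;

    /* Called by the device (from IRQ or RTSupport context) whenever the
       given buffer has been consumed and needs refilling with samples in
       the device's currently selected output format. */
    void (*fill_buffer)(struct audio_device_v2 *dev, int buffer_index);

    /* Format negotiated at start-up. */
    unsigned sample_rate;       /* Hz                */
    unsigned channels;          /* e.g. 2 for stereo */
    unsigned bytes_per_sample;  /* e.g. 2, 3 or 4    */
  } audio_device_v2;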
Dave Higton (1515) 3526 posts |
I don’t know. My only experience is in writing a crude app, and the buffer fill routine was within the app. Either the app provides it, or the app has to know where to find it. Since the device is not permanent, something has to provide those routines. When exactly should they be provided? Do they need to change according to how the device has been configured by an app? |
Chris Hall (132) 3554 posts |
However bear in mind that it would be desirable to support 24bit samples as well as 16.
It is essential to support 16bit and 8bit sound too, of course – otherwise software using voice handlers will not sound correct! |
Jeffrey Lee (213) 6048 posts |
And yes, SoundDMA does expect all the code/data pointers in the descriptor to be fixed, set in stone at the time the HAL device is created. Are there situations where they’d need to change?
SoundDMA expects them to be present at the time the HAL device is created, and to remain in existence until the time the device is destroyed. At the moment there’s no particular reason for them to have to change in response to the device configuration changing. But in a world of USB devices with lots of different supported audio formats, output paths, etc., perhaps the new design for the audio device will have to allow for some things to be changeable. So assume that things will be staying as they are until you find a reason why they need to change.
However bear in mind that it would be desirable to support 24bit samples as well as 16.
I’m assuming that the new version of the sound stack will be capable of converting both 16bit & 8bit audio to whatever output format the hardware is using. (Remember that every RISC OS 5 machine – except old RiscPCs – already translates 8bit sound to 16bit in software.)
I’ve deliberately said ‘sound stack’ above rather than SoundDMA, as it might not make sense for SoundDMA to be the one doing the conversion. We only want format/sample rate conversion to be performed in one place, so it should probably be SharedSound that adapts itself to produce audio in the format that SoundDMA requires. This may mean we have to deprecate the Sound_LinearHandler SWI and replace it with a new interface (since SoundDMA may no longer be able to accept stereo 16bit linear audio).
For 8bit audio SoundDMA will probably have to deal with the conversion itself, as it does now. Potentially it can call through to SharedSound if there are some really complicated cases – e.g. if the output format is non-linear (like the 8bit log audio itself) then it would make sense for all the audio to be mixed within SharedSound and then converted to log format, rather than SharedSound mixing and converting its audio and then SoundDMA trying to mix in the 8bit audio on top of it. |
Jeffrey Lee (213) 6048 posts |
Actually, 8bit audio is already partially handled by SharedSound – what happens at the moment is that SoundDMA converts the 8bit audio to 16bit (placing it in the final output buffer) and then calls the linear handler, with the assumption the linear handler will mix its output into the buffer (although in practice some handlers may opt to overwrite the data). In a future world of multiple crazy output formats we could add an extra step on the end of this, which takes the output of the linear handler and converts it to whatever format the hardware requires. This is also the point at which SharedSound will mix in its data. SharedSound will no longer be using the existing linear handler interface, but since SoundDMA is converting the 8bit audio to 16bit, a 16bit buffer will still be required – so we could keep the linear handler around for backwards-compatibility. Note that there’s no loss of precision when converting the 8bit audio to 16bit, so the fact we’ll have an extra conversion step (compared to letting SharedSound convert the 8bit audio itself) shouldn’t impact the quality of the audio. |
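As an illustration of why that extra step costs nothing: converting the legacy 8bit logarithmic format to 16bit linear is just a 256-entry table lookup, so it is exact. A minimal sketch, assuming a table log_to_linear[] built elsewhere from the standard log encoding (the table contents themselves are omitted here):

  #include <stdint.h>
  #include <stddef.h>

  /* 256-entry table mapping each 8-bit log sample to its exact 16-bit linear
     value; assumed to be built elsewhere from the standard log encoding. */
  extern const int16_t log_to_linear[256];

  /* Expand a block of 8-bit log samples into a 16-bit buffer.  The linear
     handler (and later SharedSound) can then mix into 'out' in place. */
  static void expand_8bit(const uint8_t *in, int16_t *out, size_t samples)
  {
    for (size_t i = 0; i < samples; i++)
      out[i] = log_to_linear[in[i]];   /* table lookup is exact, so lossless */
  }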
Dave Higton (1515) 3526 posts |
Is SoundDMA documented on this site, or only in the RO source code? If there are docs here, my search has failed to find them. I’m guessing that SoundDMA provides some way to enumerate devices and their capabilities or that a way to do that needs to be added. HAL devices may not be too much of a problem to represent in that they may not be very configurable (I don’t know). USB devices are configurable in word length, sample rate, and number of channels. As such, a fixed descriptor is not a good way to represent them. I was reasonably pleased with the USBAudio module’s API for enumerating devices. Maybe we could extend SoundDMA to enumerate devices in the same or a similar way. |
Jeffrey Lee (213) 6048 posts |
It’s documented under “Sound” on the wiki, since the three main sound modules all use that as their SWI prefix. SoundDMA documentation is complete (I think), while the others are still rather patchy. https://www.riscosopen.org/wiki/documentation/show/Sound%20SWI%20Calls
Currently there’s no way of enumerating devices. The module just uses the first HAL device it finds which initialises correctly.
Currently the audio HAL devices aren’t very configurable, because they’ve been designed around the limitations of SoundDMA. For representing USB devices we can just introduce a new major API version which makes things a lot more flexible.
I’ll admit I haven’t tried USB audio yet, despite buying a device several months ago for testing with! Another piece of the puzzle is SoundCtrl, which handles the mixer devices. The RISC OS API isn’t documented on the wiki, but it is documented in CVS: https://www.riscosopen.org/viewer/view/castle/RiscOS/Sources/Audio/SoundCtrl/Docs/SoundCtrl?rev=1.3 Unlike SoundDMA, SoundCtrl does support multiple devices – the ‘system’ parameter is used to identify which device to manipulate. |
Jon Abbott (1421) 2651 posts |
I’ve been slowly working my way through documenting SoundDMA and SoundChannels in intricate detail on the Wiki. I’ve combined what’s in the PRM with what I’ve discovered from the source, so what’s there should be accurate. I’ve a lot more to add though as Jeffrey has spotted. I was running into a lot of problems when coding the sound correction in ADFFS due to the lack of detail in the PRM, so it seemed sensible to get it on the Wiki for all to use whilst I discovered it. Whenever I go back to the sound code, I add a bit more to the Wiki, so I’ll eventually finish the job. If there’s any bits of detail missing that you need, let me know and I’ll prioritise them. |
Dave Higton (1515) 3526 posts |
Thanks, Jon. My USBAudio API is documented here. Does that provide a good basis for extending SoundDMA to handle both built-in and USB devices? USBAudio_EnumerateDevices returns a string containing a comma-separated list of the USB device names. It would be possible to extend SoundDMA so that it returns “INT0” or “INT0,INT1” etc. as internal device(s) followed by the USB list, for example. How’s that for starters? Basically I’m thinking towards what an updated SoundDMA API might be. |
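As a concrete illustration of how a caller might consume such a combined list (the device names below are invented for the example; the real string would come back from the enumeration SWI):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
    /* In reality this string would come back from the (extended) enumeration
       SWI; the device names here are invented for the example. */
    char devices[] = "INT0,USB8,USB9";

    for (char *name = strtok(devices, ","); name != NULL;
         name = strtok(NULL, ","))
    {
      if (strncmp(name, "INT", 3) == 0)
        printf("internal device: %s\n", name);
      else
        printf("USB device:      %s\n", name);
    }
    return 0;
  }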
Jon Abbott (1421) 2651 posts |
My USBAudio API is documented here
The API looks good to me, but I’m no expert on either USB or how the RISC OS devs would like multi-path audio implemented, so I’m not best placed to offer an informed opinion. Having said that, SoundDMA is fairly simple in nature in that it just needs to know where to copy the audio data provided by the Channel Handler / Linear Handler and at what rate. Options for extending SoundDMA include:
SharedSound would also need modifying if we were to implement multi-pathing. Another option worth considering is shifting the uLaw conversion from SoundDMA to SharedSound, leaving SoundDMA to pass audio streams from Channel Handlers to SharedSound and then pass the final mix it gets back to the relevant audio device. I believe the SharedSound API was designed to support 8bit but was never implemented – I found some detail on it somewhere, but can’t find it at the minute to link to it. |
Dave Higton (1515) 3526 posts |
For compatibility with existing code, it would be nice to extend SoundDMA’s API with something similar to what’s there. But Sound_SampleRate can’t cope with a continuous range of sample rates, for example – it can only cope with discrete values. Maybe the thing to do is to add on a set of calls the same as for USBAudio. Where USB devices are addressed by passing R0 → “USB8” (for example), the internal device(s) is/are addressed by passing R0 → “INT0” etc. I think it may be easier to map internal devices onto a USBAudio-like model rather than the other way round. But fundamentally USBAudio is stream-based, where SoundDMA is linear-handler-based. Which is the better way to go? I’ve never played with linear handlers, so I don’t know. |
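For reference, Sound_SampleRate only deals in an indexed table of discrete rates, which is the limitation being described. A small sketch of walking that table from C – the reason codes (0 = count the rates, 2 = read the rate for a 1-based index, in units of 1/1024 Hz) and the SWI number are quoted from memory of the PRM, so check the documentation before relying on them:

  #include <stdio.h>
  #include "kernel.h"
  #include "swis.h"

  #ifndef Sound_SampleRate
  #define Sound_SampleRate 0x40146   /* verify against the PRM */
  #endif

  int main(void)
  {
    int count = 0;

    /* Reason code 0: read the number of available sample rates. */
    if (_swix(Sound_SampleRate, _IN(0)|_OUT(1), 0, &count) != NULL)
      return 1;

    for (int index = 1; index <= count; index++)
    {
      int rate = 0;

      /* Reason code 2: read the rate for a given index (units of 1/1024 Hz). */
      if (_swix(Sound_SampleRate, _INR(0,1)|_OUT(2), 2, index, &rate) == NULL)
        printf("Rate %d: %d Hz\n", index, rate / 1024);
    }
    return 0;
  }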
David Feugey (2125) 2709 posts |
Yep. And for DRenderer perhaps the solution is not patching it to make it 88 kHz compliant, but patching it to produce an 88 kHz output from a 44 kHz one. Then there is no need to change existing titles that use it. |
Jeffrey Lee (213) 6048 posts |
My USBAudio API is documented here
That looks like a reasonable basis to me too. If it can deal with most/all quirks of USB devices then it should be more than adequate for the existing HAL audio devices (although I note that some things, like channel mapping/ordering within the data stream, don’t appear to be covered by your API).
USBAudio_EnumerateDevices returns a string containing a comma-separated list of the USB device names. It would be possible to extend SoundDMA so that it returns “INT0” or “INT0,INT1” etc. as internal device(s) followed by the USB list, for example. How’s that for starters?
Is it really necessary to use string identifiers? If it’s something intended to be used in configuration files (similar to the GraphicsV driver identifiers) then it should be something which isn’t likely to change between reboots (a bit tricky for USB devices, granted). But if it’s just something assigned dynamically at runtime then it might as well just be an opaque int. USBAudio_EnumerateDevices could also do with specifying what happens if the supplied buffer isn’t large enough (also, is ‘Enumerate’ really a sensible name if it returns the full list of devices in one go?)
Yes, definitely. We also have the advantage that there aren’t (yet) any machines where USB audio is the only option available – so if it means that some of SoundDMA’s existing functionality isn’t available when using a certain USB audio device (to the point where the USB device can’t sensibly be made the default output device) then that shouldn’t be a big deal. The user can just go back to using the onboard audio, or buy a better USB device!
For USBAudio, the application pushes data into the stream, and it’s the application’s job to make sure an appropriate amount of data is kept buffered at any given time. For SoundDMA (whether using the 16bit linear handler or an 8bit voice generator), the sound system calls the application whenever it needs a buffer filling, and the application must make sure it’s able to supply the new data immediately (since it’ll be called from an IRQ handler).
The IRQ-based nature of buffer fills does make things a bit tricky, e.g. the HAL version of SoundDMA does need to make use of RTSupport to call the linear handler/channel handler because you’re not allowed to enable interrupts from within the DMASync callback that comes from DMAManager. But ultimately it does result in low-latency, low CPU overhead sound output without any buffer underflows (unless something further down the chain has failed, e.g. a streaming music player not being able to buffer data from disc or from the network fast enough).
For audio input (which we don’t have an API for yet – but this would probably be a good time to add one), I’m of the opinion that a DeviceFS stream based interface would be a better choice than a callback/IRQ based system, I think mainly because it’s easier for application authors to use. You can still get fairly low latency by listening out for the buffer fill event from BufferManager, but it’s not likely to be the end of the world if the foreground task which is reading from the stream gets blocked for a while and a backlog of a second or so builds up. |
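A generic (not RISC OS-specific) sketch of the two models, with all names invented for illustration: in the pull model the driver calls a fill routine at interrupt time and the routine must supply data immediately; in the push model the application tops up a ring buffer whenever it gets scheduled and the driver drains it.

  #include <stdint.h>

  #define RING_SIZE 16384u            /* bytes; size chosen only for illustration */

  static uint8_t ring[RING_SIZE];
  static volatile unsigned rd, wr;    /* free-running read/write indices          */

  /* Pull model (SoundDMA style): called from the IRQ/RTSupport path when the
     hardware needs another block.  It must fill 'bytes' bytes right now. */
  void fill_callback(uint8_t *dest, unsigned bytes)
  {
    for (unsigned i = 0; i < bytes; i++)
      dest[i] = (rd != wr) ? ring[rd++ % RING_SIZE] : 0;  /* 0 = silence on underrun */
  }

  /* Push model (USBAudio style): the application calls this from the foreground
     whenever it has more audio, and must keep the ring topped up itself. */
  unsigned push_audio(const uint8_t *src, unsigned bytes)
  {
    unsigned copied = 0;
    while (copied < bytes && (wr - rd) < RING_SIZE)
      ring[wr++ % RING_SIZE] = src[copied++];
    return copied;                    /* may be short if the ring is already full */
  }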
Colin (478) 2433 posts |
USBAudio can work the same as SoundDMA in that the buffer can be filled from a buffer emptying upcall. This is called from the IRQ handler with interrupts disabled. Latency would be related to the size of the USB buffer required.
Pros of the SoundDMA scheme: audio can be played in the background and would be unaffected by delays in the cooperative multitasking. Cons: you can’t play data from application space.
The only problem I see with patching the existing system to use USB audio is whether it copes with audio devices disappearing. |
Colin (478) 2433 posts |
Unfortunately I think it has to be a callback/IRQ based system. Recording to application space is dependent on multitasking returning before the input buffer is filled – not very reliable. So we are left with recording in the background to a disc or to non-paged-out memory. |
Jon Abbott (1421) 2651 posts |
We’re probably talking about streaming here, as Voices require Modules as stated in the PRM. The simplest suggestion is to have SoundDMA/SharedSound allocate a Stream DA for the application when it registers the Stream, that’s sufficiently large to cover CMT delays. 1 or 2 sec of audio data perhaps – we’re not going to miss 1MB going to a circular buffer for a Stream on modern kit (see the rough sizing sketch after this post).
We could add that via a Service call (Audio_DeviceRemoved?) and have SoundDMA shut down the DMA/Channel Handler etc. cleanly.
The current scheme can stay as is, with SoundDMA triggered off IRQ and calling the Channel Handler / Scheduler. For Application Streams, there needs to be some integration with task switching (WIMP?). A Wimp message, perhaps, which would switch the app in and allow it to stream more audio, driven by a callback from SoundDMA? |
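A rough sizing check for that suggestion (the sample rates and formats are just examples): even two seconds of buffering at common formats stays comfortably under the 1MB figure.

  #include <stdio.h>

  int main(void)
  {
    /* Bytes needed for N seconds = rate * channels * bytes per sample * N. */
    unsigned cd_2s     = 44100 * 2 * 2 * 2;   /* 16-bit stereo, 2 seconds       */
    unsigned hi_res_2s = 48000 * 2 * 4 * 2;   /* 24-in-32-bit stereo, 2 seconds */

    printf("CD-quality, 2 s: %u bytes\n", cd_2s);      /* 352,800 bytes */
    printf("24/32-bit, 2 s:  %u bytes\n", hi_res_2s);  /* 768,000 bytes */
    return 0;
  }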
Colin (478) 2433 posts |
I have USB audio buffered to application space at the moment and it isn’t good enough if you want to play something in the background and do something else, which may include single-tasking. I’d also expect the lower latency of background processing to smooth out the disc reading, making playing while using the desktop a smoother experience – application space playing generally needs bigger reads from disc. |
Dave Higton (1515) 3526 posts |
We have to be careful not to preclude real time communication apps, like a VoIP phone app. Clearly they must work with the absolute minimum of buffering. But there has to be an easy way to get the real time stuff not to be blocked, whereas the GUI (which controls and monitors) can normally afford to be blocked for a while. We must be able to specify that replay buffers must be emptied and played out immediately, for applications that require such behaviour. It has always appeared to me that these callback/IRQ systems require the programmer to jump through multiple hoops that are not well explained. I’d appreciate any easing, any recipe, any tutorial with examples. I’ve done a VoIP phone app (still available from my web site) that does everything from within the app, which is problematic when other tasks are used (even when windows are dragged around). Web browsing during a call is out of the question :-( |
Jeffrey Lee (213) 6048 posts |
It does have partial support – on a Raspberry Pi you can RMKill BCMSound and then reinitialise it later and sound will start up again (technically the same would work on other machines using SoundDMA_HAL, but on those machines the HAL device would have been created by the HAL rather than a module, so is a bit harder to deregister). Service_Sound 9 & 10 will be issued as audio starts & stops.
For audio input (which we don’t have an API for yet – but this would probably be a good time to add one), I’m of the opinion that a DeviceFS stream based interface would be a better choice than a callback/IRQ based system
Internally it would still use callbacks/IRQs to take data from the device and store it in the DeviceFS buffer. But from the application’s viewpoint it would just be reading from a DeviceFS stream. Buffering then becomes a problem which can be solved once within SoundDMA, rather than individually within each application. To help with buffer overflows the application can indicate how big it wants the buffer to be… or it can listen out for the DeviceFS buffer filling callbacks and pull the data out as soon as it arrives (or register a callback which can then stream it out to disc, send it over the network, etc.) But if people want a raw interface without any buffering within SoundDMA then I’m fine with that as well. |
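To illustrate the application-side view of such an input stream (the device name Devices:$.AudioIn below is purely hypothetical – no such DeviceFS device exists yet):

  #include <stdio.h>

  int main(void)
  {
    /* Hypothetical device name - nothing like this exists yet. */
    FILE *in = fopen("Devices:$.AudioIn", "rb");
    if (in == NULL)
      return 1;

    char block[4096];

    /* The driver keeps the DeviceFS buffer topped up from IRQ context;
       the application just drains it whenever it gets a chance. */
    for (int i = 0; i < 100; i++)
    {
      size_t got = fread(block, 1, sizeof block, in);
      /* ...write 'got' bytes to disc, send them over the network, etc... */
      (void)got;
    }

    fclose(in);
    return 0;
  }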
Colin (478) 2433 posts |
Yes, and that would be great if we had pre-emptive multitasking where the application has some quality of service and can fill the buffer by polling. In RISC OS you can only get quality of service for polling by using a ticker handler from a module, but what is the point? Once you are programming in module space you may as well drive your buffer filling from IRQ/callback events. DeviceFS or similar is a good idea for playing from application space, with its inherent limitations; however, my opinion of DeviceFS says avoid it like the plague. |
Jeffrey Lee (213) 6048 posts |
I’m talking about using DeviceFS for audio input, not for output. The application won’t be filling the buffer, the hardware driver will be. The application only has to worry about emptying the buffer before it overfills. |