Multiple audio devices
Jon Abbott (1421) 2651 posts |
Adding to Jeffrey’s explanation of SoundDMA etc., here’s the flow diagram. For simplicity, SharedSound can be considered identical to SoundChannels/Voice Generators, but it is 16-bit only and can have any number of channels: |
Jon Abbott (1421) 2651 posts |
Sound is also IRQ based, using DMA, and very dependent on timing to avoid underruns. I should probably have explained that slightly differently and made it clear I was referring to audio output. Sound within RISC OS is asynchronous: the device outputting the audio raises an IRQ when the buffer is empty, which SoundDMA then actions. Unless I’m misunderstanding the way USB audio works, I believe it’s synchronous, meaning the USB device is not generating an IRQ to request data, but is instead reliant on the source to pass data on a regular, timely basis. |
Colin (478) 2433 posts |
USB also has asynchronous audio devices. USB only transfers data on the USB clock, but it can vary the amount it sends each tick. So for USB1, 2 channels at 96000 Hz and 4 bytes per sample, you start by sending 768 bytes every tick. The device monitors the sample rate it is receiving and, if it is not correct, sends a feedback message telling you to increase or decrease the samples per second you are sending, so the number of samples per tick may be varying continuously. In a non-USB audio device the IRQ is not the controller of sample consumption by the device – IRQs are not that accurate – it is just a feedback message that the sound FIFO is emptying. The controller of sample consumption is the device’s audio clock. I doubt the IRQ would work if data was not being sent to the device, so it can’t be used as a general-purpose audio clock to drive the rest of the system. As I see it, the sound IRQ and the DMA request should have no place in the sound system other than the Audio hardware part of your diagram. Presumably DMA is how you feed the sound chip, just as USB has its own DMA interface. I think you need a generic FIFO interface in the audio hardware. The audio device consumes data from the FIFO – if it is available – and the FIFO emptying raises a fill request from the backend driving the system. This is likely once per millisecond for a USB1 device, 8 times per millisecond for USB2, and once per IRQ request for the motherboard device. |
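A minimal sketch of the per-tick arithmetic Colin describes; the function name and layout are mine, for illustration only:

    /* Bytes per USB1 isochronous frame (one 1 ms tick) for PCM audio out.
     * Example from the post: 96000 Hz x 2 channels x 4 bytes per
     * channel-sample = 768000 bytes/s, i.e. 768 bytes per 1 ms frame. */
    static unsigned bytes_per_tick(unsigned sample_rate, unsigned channels,
                                   unsigned bytes_per_channel_sample)
    {
        return (sample_rate * channels * bytes_per_channel_sample) / 1000;
    }

The asynchronous feedback Colin mentions then nudges that per-tick count up or down so the long-term rate tracks the device’s own audio clock rather than the sender’s.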
Jon Abbott (1421) 2651 posts |
If you take VIDC/VIDC20 as an example here, the IRQ is raised when the current sound buffer has fully emptied. This is why there are two audio buffers in SoundDMA: once the first is exhausted, MEMC/IOMC immediately switches to the second buffer and raises an IRQ for the first to be filled. SoundDMA then has a finite amount of time (buffer size × sample period / 8) to ensure the exhausted buffer is filled and the DMA start/end set for the next buffer (see the first sketch after this post).
The IRQ is the driving factor in the sound system and, as far as I’m aware, sound is DMA-driven on all RISC OS platforms, from the A305 up to the latest – someone will have to correct me if any of the more recent hardware deviates from this.
SharedSound implements a FIFO buffer and is the preferred interface for audio. Obviously it’s still called by SoundDMA when the hardware raises an IRQ, so it expects subscribers to provide additional audio data on demand and not in a streamed fashion. There’s nothing to stop audio from being streamed; you just need a module that buffers the audio data into a FIFO and then passes fixed-size blocks of audio to SharedSound / SoundDMA when requested (see the second sketch after this post). This is how I implemented the sample rate conversion in ADFFS; ADFFS will call the game’s Channel Handler when the FIFO drops below a threshold and in the background is passing fixed blocks of audio to SoundDMA at the rate the hardware is requesting, so they’re both running at independent sample rates, sample periods and IRQ rates. |
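A minimal sketch of the double-buffer (“ping-pong”) fill described above. All names are hypothetical – the real work happens inside SoundDMA and the hardware driver; this only illustrates the flow and the deadline the IRQ handler works to:

    #include <stdint.h>

    #define BUFFER_SAMPLES 256

    /* Two DMA blocks: the hardware drains one while the IRQ handler
     * refills the other, and must finish before that one is exhausted. */
    static int16_t dma_buffer[2][BUFFER_SAMPLES];
    static int     draining;        /* index of the block the hardware owns */

    /* Supplied by whoever generates audio (SharedSound clients, etc.). */
    extern void fill_audio(int16_t *dest, unsigned samples);

    /* Called on the "buffer exhausted" IRQ: the hardware has already
     * switched to the other block, so refill the one it just finished. */
    void sound_buffer_irq(void)
    {
        int exhausted = draining;
        draining = 1 - draining;    /* hardware is now draining this block */
        fill_audio(dma_buffer[exhausted], BUFFER_SAMPLES);
        /* ...then reprogram the DMA start/end for the refilled block. */
    }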
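And a minimal sketch, again with hypothetical names, of the buffering module described above: a ring FIFO that accepts streamed audio at the producer’s own rate and hands fixed-size blocks to SharedSound/SoundDMA whenever they ask:

    #include <stdint.h>
    #include <string.h>

    #define FIFO_FRAMES 8192                 /* stereo 16-bit frames */

    static int16_t  fifo[FIFO_FRAMES][2];
    static unsigned head, tail, count;       /* producer/consumer positions */

    /* Producer side: called whenever the stream has more data available.
     * Returns the number of frames accepted (the rest must be retried). */
    unsigned fifo_write(const int16_t (*frames)[2], unsigned n)
    {
        unsigned written = 0;
        while (written < n && count < FIFO_FRAMES) {
            memcpy(fifo[head], frames[written], sizeof fifo[0]);
            head = (head + 1) % FIFO_FRAMES;
            count++;
            written++;
        }
        return written;
    }

    /* Consumer side: called from the SharedSound/SoundDMA fill handler.
     * Pads with silence on underrun so the hardware always gets a full block. */
    void fifo_read_block(int16_t (*dest)[2], unsigned n)
    {
        for (unsigned i = 0; i < n; i++) {
            if (count) {
                memcpy(dest[i], fifo[tail], sizeof fifo[0]);
                tail = (tail + 1) % FIFO_FRAMES;
                count--;
            } else {
                dest[i][0] = dest[i][1] = 0;     /* underrun: silence */
            }
        }
    }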
Rick Murray (539) 13840 posts |
Please define “DMA”. |
Jon Abbott (1421) 2651 posts |
The audio hardware is reading the audio data from the sound buffer through Direct Memory Access. You obviously need to fill that buffer, which is triggered by the IRQ and actioned by the processor. |
Andrew Rawnsley (492) 1445 posts |
I may have missed this in an earlier post, but I’m trying to weigh the upsides of each approach. Jeffrey’s initial SoundDMA methodology seems to have the upside that it is as close as possible to the current way of doing things. This should, I’d imagine, minimise compatibility issues (this is really important from my perspective). I’m not immediately seeing the upside to the DeviceFS approach, other than it (possibly?) fitting better with USB audio devices? I suspect I’m missing something fundamental here (I’d imagine that Linux audio works more akin to DeviceFS or something?). Easier for programmers too, I think? That’s good, but we have things like PlayIt and DigitalRenderer etc. to assist with audio playback. The only area I think we need to address soonish is probably VOIP audio projects, which presumably need simultaneous record/play. Does blocking file I/O become a problem there?

Now, bear in mind that the driving force here is being able to switch between analogue and HDMI audio (and possibly SPDIF on boards that support it), and a desire to get this working soon-ish. USB audio is important too, but being realistic, I cannot imagine more than a small portion of users running USB audio as their primary device. The API needs to handle all options (and that definitely includes USB devices – they’ve been out in the cold too long), but I think the focus needs to be on:

a) Compatibility – existing audio software such as PlayIt, DataVox and DigitalRenderer, not to mention the DigitalCD modules / AMPlayer, need to work without modification.

b) 95% of users will be running either analogue audio from the motherboard, or HDMI audio.

c) USB routing of all audio needs to be possible, transparently. However, this shouldn’t be the focus at the expense of all else, nor at the expense of undue delays.

Since USB routing of all audio isn’t currently possible at all, it would make sense to update USB audio to support whatever API is selected, rather than tailoring the API towards USB audio.

Sorry if this seems negative on the DeviceFS approach – I’m just wanting to understand the benefits of what appears to be a fairly fundamental change in strategy to how audio is handled under RISC OS. Perhaps it’s not as big a change as it sounds?

Side question… the Pi currently does HDMI audio thanks to the hardware, but how do we (or don’t we?) switch between analogue and digital audio on the Pi? |
Colin (478) 2433 posts |
Jon. OK, so SoundDMA is using a ping-pong buffer to transfer sound. That’s workable, I think: you just have the audio devices register with SoundDMA the addresses of the two blocks of memory for SoundDMA to use – SoundDMA is a rubbish name for it, as it implies that the DMA is in SoundDMA when it is actually in the audio device. Each audio device registers with SoundDMA its own pair of memory blocks.
Audio devices register a ping-pong buffer with SoundDMA. IRQ devices would use an uncached pair of memory blocks and call SoundDMA_SwapBlock every IRQ, while a USB device would use a cached pair of memory blocks and transfer the data to its own buffers on buffer-emptying events. Edit: added block sizes when registering. |
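A minimal sketch of the registration Colin is proposing here. Nothing like this exists yet, so every name (the structure, its fields and the two calls) is hypothetical, modelled on the post:

    #include <stddef.h>

    /* The audio device owns the DMA; it hands SoundDMA the addresses
     * (and size) of its two ping-pong blocks when it registers. */
    typedef struct {
        void   *block[2];      /* uncached pair for an IRQ/DMA device,
                                  cached pair for a USB device          */
        size_t  block_size;    /* bytes per block ("Edit" in the post)  */
    } sounddma_buffers_t;

    /* Device -> SoundDMA: register the pair of blocks at start of day. */
    extern int sounddma_register_buffers(const sounddma_buffers_t *buf,
                                         void **handle_out);

    /* Device -> SoundDMA ("SoundDMA_SwapBlock" in the post): block n is
     * exhausted, refill it while the device drains the other.  An IRQ
     * device calls this every IRQ; a USB device calls it on each
     * buffer-emptying event. */
    extern void sounddma_swap_block(void *handle, int exhausted_block);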
Colin (478) 2433 posts |
So then you need a common audio device interface for configuration – something like the way serial devices use OS_Args – and you end up with a version of DeviceFS which uses a ping-pong buffer instead of a FIFO. Any of the list of audio devices – USB, HDMI, motherboard, PCI – could then be plugged into the back of SoundDMA to make the ‘system’ sound work. Alternatively, the system sound can be bypassed altogether and programmers can access a device directly via a common interface. |
Colin (478) 2433 posts |
I’d also register the other way around: SoundDMA registers with the device to get its buffers. |
Colin (478) 2433 posts |
You could of course use DeviceFS itself and not a version of it. Part of the ioctl interface would be a ‘read ping-pong buffers’ call and a ‘write buffer-swap function’ call. The DeviceFS FIFO interface would just feed the ping-pong buffer, and would be disabled when the ping-pong is in use. That way you get a way to refer to the device, with the devices appearing in devices:$ for programmers/users to select from, and if you want to prototype a new mixing/upsampling algorithm you can just plug in a USB device and stream to it using the FIFO interface. You could even test it in BASIC if you wanted. |
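A minimal sketch of what such an ioctl might look like from the client side, using the standard DeviceFS/FileSwitch IOCtl call (OS_Args 9). The group and reason numbers, and the parameter layout, are invented purely for illustration – they would have to be defined by the audio device API:

    #include "kernel.h"
    #include "swis.h"

    /* Standard IOCtl parameter block: bits 0-15 reason, 16-23 group,
     * bit 30 read, bit 31 write; the second word is the data in/out. */
    typedef struct {
        unsigned reason;
        unsigned data;
    } ioctl_block;

    #define AUDIO_GROUP          0x42u   /* hypothetical group number      */
    #define AUDIO_READ_PINGPONG  1u      /* hypothetical: read buffer pair */
    #define IOCTL_READ           (1u << 30)

    /* Ask an open audio device (file_handle from OS_Find on devices:$...)
     * for the addresses of its two ping-pong blocks. */
    static _kernel_oserror *read_pingpong(int file_handle, void *blocks_out[2])
    {
        ioctl_block blk;
        blk.reason = AUDIO_READ_PINGPONG | (AUDIO_GROUP << 16) | IOCTL_READ;
        blk.data   = (unsigned)blocks_out;  /* device fills in the two addresses */
        return _swix(OS_Args, _INR(0,2), 9, file_handle, &blk);
    }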
Tony Noble (1579) 62 posts |
USB soundcards are very common and pretty cheap for the quality achieved – anyone who wants decent sound off a Pi would be running one, as will a lot of people fed up with their sound glitching when certain system activities are taking place. Anyone using gaming-type headsets will be using one, even if it’s for Skype or some other VOIP app, as will anyone recording music. The fact that these apps may not have RISC OS versions doesn’t seem like a decent reason to write the devices off. Designing any system based around what’s onboard a small range of devices seems somewhat short-sighted – surely any sound system needs to be utterly agnostic to the hardware beneath and also to the method of its attachment to the machine? That’s up to the device driver to sort out and comply with a defined API… |
Andrew Rawnsley (492) 1445 posts |
Tony – I didn’t suggest writing them off – both APIs seem to support USB audio devices. I merely feel that the focus should be on maximum compatibility with existing software, and the speed/ease at which Jeffrey can progress. The immediate need is HDMI audio (why I asked Jeffrey to look into this), since modern displays are tending to drop all analogue inputs, i.e. assuming HDMI == video+audio, and no DVI inputs. As long as USB devices are also accommodated, it’s win-win :) If it came down to “change the OS to fit the needs of USB audio” or “change the USB audio software implementation to fit the needs of RISC OS”, I’d choose the latter for the reasons mentioned above (since USB audio is still WIP). NOTE – it doesn’t seem to be quite so black-and-white, so forgive the over-simplification. Colin’s posts made some good points/suggestions, but really it’ll be down to Jeffrey as to what he feels is the most practical and least likely to break things. For example, PlayIt currently offers drivers for “Acorn16bit”, “Lark” (CC podule) and “SharedSound”. Any app using PlayIt can thus play samples via those devices. “Acorn16bit” is an older driver that ties up the sound system; “SharedSound” mixes with other apps. Any low-level API update should still allow SharedSound (and probably Acorn16bit) to work “out of the box” (for HDMI and, assuming code is in place, USB). As a side note, is there a PlayIt driver for USB audio? |
Jeffrey Lee (213) 6048 posts |
As Andrew says, the driving force behind these changes is to get HDMI audio working soon-ish. Although I’d like for us to have an all-singing, all-dancing audio system, the reality is that there’s too much work and not enough developer time available for a developer to be able to complete a large project in one go. Work has to be broken down into smaller chunks in order to try and maintain a healthy balance between improving the OS/implementing new features, fixing bugs, and getting new hardware ports up to a state where they’re worthy of an official stable release. In my first post I mentioned that my goals were:

* To allow multiple devices to be present at once

In terms of public API changes, this means we only really need the following:
With my original proposal:
If I’ve been keeping up with the discussion correctly, Colin’s proposal is/was to have all the audio devices become entries in DeviceFS, with one central module in charge of managing the DeviceFS entries (as opposed to having each individual driver module create its own DeviceFS entries). He also wants each device to be able to accept non-native data formats/sample rates and internally convert them to whatever format the hardware requires. This throws a spanner in the works – although I don’t want to implement DeviceFS support now, how can we design the changes that are being implemented so that they won’t make a future switch to DeviceFS more ugly (from an API perspective), or make it more difficult than it would be to switch to DeviceFS now? To answer this question, I think we’d need to answer the following questions:
|
Rick Murray (539) 13840 posts |
??? We have HDMI audio – it’s how I get sound out of the Pi (via an HDMI to VGA adaptor). What would be nice is the ability to switch at runtime (instead of a line in CONFIG.TXT) and to be able to use both at the same time. |
Andrew Rawnsley (492) 1445 posts |
Rick – thanks for answering my question about how audio is switched on the Pi :) Assuming you weren’t just being flippant with your “we have HDMI audio”, none of the non-Pi devices currently implement HDMI audio. This is becoming a real problem – of four high-resolution displays (i.e. >1080p) that I’ve been testing with, only one supported analogue audio. Curiously, one of the others had an analogue audio connection, but it was an output for headphone hookup – the assumption was audio via HDMI or DisplayPort. This methodology is also seen on monitors without audio capability – HDMI audio connections are supported, with analogue audio outs. It all adds to a picture where HDMI audio support is needed on non-Pi devices. Hopefully the Pi will benefit from easier switching, too :) |
David Feugey (2125) 2709 posts |
If we could solve the 88 kHz problem too… |
Colin (478) 2433 posts |
No Jeffrey, I’ve moved on from there. Every device is registered with DeviceFS. You can write a module to do that for individual devices or sets of devices. The USB audio module, for example, may create audio devices for all relevant USB devices; HAL devices would have their own module and register with DeviceFS accordingly. System sound is an audio-out PCM player, so all PCM audio-out devices register with DeviceFS as devices:$.audio.out.pcm.dev0 to devxxxxxxx, i.e. each directory in devices:$ is a class of device. MIDI-out devices would register as devices:$.audio.out.midi.dev0. A new module for a devices:$.audio.out.pcm device would simply enumerate the audio.out.pcm directory and register with a number 1 higher than the highest number used so far. The current sound system is only interested in PCM devices so would only accept devices from the devices:$.audio.out.pcm directory. Adding a null device to the directory would enable the system to start off outputting to a null device, or switch to a null device if a device is removed. To use the system you add a SWI to SoundDMA.
SoundDMA would reject any device with a different directory path. All this requires no changes to DeviceFS. So all you need to do is specify the OS_Args IOCTL interface for audio.out.pcm devices and you are good to go. You write a DeviceFS module for a HAL device. The FIFO part is trivial – just fill the ping-pong – or just make it act as a null device for now if you can’t be bothered. The interface you are interested in is the OS_Args IOCTL, where you get the ping-pong buffer addresses and pass a swap_buffer function. IOCTL is also where you could enumerate frequencies, read the sample size, etc. – that would all be defined in the audio.out.pcm API. So this requires no changes to the current audio system except, on the face of it, trivial changes to SoundDMA to allow it to work with multiple backends. Once you have written one DeviceFS module for one device, converting a copy for another device should be relatively easy. So the current system will be unchanged as far as applications go, and the user is able to change the device used easily. So, just to answer your questions:
It’s the device path, as shown above. By defaulting to a null device, the user’s default device can be set in !Boot by a configure plugin.
full stops?
You can use trees; I suggest using that to signify the class of the device. The difference between different classes would be the programming API. Serial and USB devices should have done this, i.e. they should have been devices:serial.dev1, devices:usb.dev23 instead of devices:serial2, devices:usb1, etc. The part of the name which is not a number is a class of device and tells you the API you expect to use with it.
Don’t know about Graphic cards.
directory paths solve that.
No. There is no enumeration SWI and SoundDMA doesn’t know anything about multiple devices. It just knows about the one it is using at the moment.
There are no enumeration SWIs. You just use OS_GBPB to enumerate devices:$.audio.out.pcm.dev* (sketched after this post). If you make it a convention that dev0 is a null device and dev1 is the hardware device most likely to be used as the system device, you just have to arrange the modules so that the preferred permanent hardware device is dev1; otherwise they are given numbers in the order that they are seen. You would ensure fixed-device modules are before removable-device modules so that the numbers don’t change depending on what removable device is plugged in.
use a tree. |
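A minimal sketch, in C, of the OS_GBPB enumeration Colin describes. OS_GBPB 9 (read entry names from a directory) is an existing call, but the devices:$.audio.out.pcm directory and the dev* naming convention are part of his proposal, not an existing API:

    #include <stdio.h>
    #include "kernel.h"
    #include "swis.h"

    /* List leafnames in the proposed devices:$.audio.out.pcm directory.
     * OS_GBPB 9: R0=9, R1=directory, R2=buffer, R3=names to read,
     * R4=offset (0 to start), R5=buffer size, R6=wildcard to match.
     * On exit R3=names read, R4=next offset, or -1 when finished. */
    static void list_pcm_devices(void)
    {
        char buffer[256];
        int  offset = 0;

        while (offset != -1) {
            int read;
            if (_swix(OS_GBPB, _INR(0,6) | _OUTR(3,4),
                      9, "devices:$.audio.out.pcm", buffer,
                      1, offset, sizeof buffer, "dev*",
                      &read, &offset) != NULL)
                break;                          /* e.g. directory not found */
            if (read > 0)
                printf("found %s\n", buffer);   /* one null-terminated name */
        }
    }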
Rick Murray (539) 13840 posts |
I was being partially flippant (my default setting). ;-) To be more realistic, you’re going to run into hardware issues here. Consider the Beagle(xM). That’s an HDMI connector on the board. But it is DVI from the chip. They are the same thing as far as video is concerned, but no way, no how, are you going to get HDMI audio out of that machine. Maybe simpler to recommend people don’t buy crappy monitors with no analogue audio. I mean, god, how much does it cost for a 3.5mm socket and some MOSFETs to boost the sound? |
Colin (478) 2433 posts |
I missed out that the sound system will not survive a removable device being unplugged. When it is unplugged, SoundDMA replaces it with a null device and the user has to select the device again, with its new number, when it is plugged back in. To use a removable device permanently, the user needs to arrange it so that it is the first removable device seen at bootup. |
Jeffrey Lee (213) 6048 posts |
There are video differences between HDMI and DVI. Consider the fact that HDMI supports interlaced modes, YUV, deep colour, etc. while DVI doesn’t. Even for basic video there are differences between a HDMI signal and a DVI signal (based on my experiences, a HDMI signal will not display on a DVI-only monitor). But none of this is a problem for RISC OS, because the video driver will be aware of the fact that the OMAP3 hardware is DVI, and so it won’t try offering HDMI audio through it. Users might get confused about the fact that the HDMI connector isn’t truly HDMI, but that’s an issue for them to take up with the beagleboard designers. |
Colin (478) 2433 posts |
When I think about it, coping with fixed and removable devices is simpler than that. Instead of enumerating ‘devices:$.audio.out.pcm.dev*’, just enumerate ‘devices:$.audio.out.pcm.*’; then instead of having
you have
Then removable USB devices changing doesn’t affect other device numbers. To write a new module for a USB device that doesn’t work with the main module, for example, you would add it to the list of usb leafnames. |
Colin (478) 2433 posts |
To simplify the job even further, register the device with 0 rx streams and 0 tx streams; then you won’t be able to open the streaming interface with OS_Find and you won’t have to implement any FIFO code. You then implement your audio.out.pcm interface via DeviceFS_CallDevice – you can’t use OS_Args as you need a file handle for that. That should get things working with the minimum of code. You may be able to use it this way with graphics devices too, so devices: would have a devices:$.graphics folder. |
Rick Murray (539) 13840 posts |
“Consider the Beagle(xM). That’s an HDMI connector on the board. But it is DVI from the chip. They are the same thing as far as video is concerned”

Wrong way around. ;-) The DVI(-D) coming out of the Beagle is the same thing as equivalent video on an HDMI connector; namely one could plug a Beagle into an HDMI monitor and it ought to work. As you point out – HDMI is a superset of DVI-D, therefore the same is not true in reverse. One can’t necessarily expect a DVI-only monitor to work with a device (set top box?) outputting HDMI.
This was referring back to Andrew’s “The immediate need is HDMI audio (why I asked Jeffrey to look into this) since modern displays are tending to drop all analogue inputs”, pointing out that the Pi already does HDMI audio, the Beagle just can’t, and I don’t know about the iMX6… The R-Comp site still says “Full ARMX6 (i.MX6) info will be added shortly.”, so I can’t, like, Google part number for info or anything. I’d guess it ought to in this day and age. Ditto Titanium (it looks as if the AM5728 does HDMI audio, looking at the Beagle X15 specsheet).
;-) It says quite clearly on page 25 of the reference manual: “The BeagleBoard is equipped with a DVI-D interface that uses an HDMI connector that was selected for its small size. It does not support the full HDMI interface and is used to provide the DVI-D interface portion only.” |
James Peacock (318) 129 posts |
This may be nonsense, however: regarding hardware device registration, I seem to remember that HAL devices were to be used for that. A USB audio module would register devices with OS_Hardware 2. The core sound modules, such as SoundDMA, SharedSound and/or a module to manage DeviceFS entries, can hook into OS_Hardware for enumeration and Service_Hardware for notification. It looks like mixers are already handled this way. For the core sound system modules to be able to talk to hardware generically, there would need to be a limited number of standard HAL audio device(s) defined to cover whatever data-shuffling methods (DMA, buffers, callbacks etc.) are needed. It looks like the HAL devices are flexible in this regard and the system is extensible in future via defining new ‘standard’ audio HAL devices. Real motherboard hardware could be exposed by the HAL as a standard device, or it could expose a specific HAL device which a support module recognises. This module would then expose a ‘standard’ audio HAL device, so acting as the glue to the core sound modules. |
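A minimal sketch of the registration step James mentions, from a hypothetical support module. OS_Hardware with reason 2 in R8 adds a device described by the block in R0; the descriptor’s layout (type, id, description, and the entry points for whatever data-shuffling method a ‘standard’ audio device type would define) follows the HAL device documentation and is left opaque here:

    #include "kernel.h"
    #include "swis.h"

    #ifndef OS_Hardware
    #define OS_Hardware 0x7A    /* in case the local swis.h predates it */
    #endif

    /* Descriptor built elsewhere, laid out per the HAL device spec for a
     * (yet to be defined) standard audio device type. */
    extern void *audio_device_descriptor;

    static _kernel_oserror *register_audio_hal_device(void)
    {
        /* R0 = pointer to the device structure, R8 = 2 ("add device").
         * Interested modules then hear about it via Service_Hardware. */
        return _swix(OS_Hardware, _IN(0) | _IN(8), audio_device_descriptor, 2);
    }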