Multiple audio devices
Jeffrey Lee (213) 6048 posts |
I’m currently working on a sound related task which is going to require us to finally tackle some of the issues relating to having multiple audio devices present in the system (but not necessarily multiple devices active at once). My goals/requirements are as follows:
I.e. the bare minimum to allow machines with multiple audio output devices (or things like removable USB audio devices) to be dealt with in a sensible and user-friendly manner. More advanced features – 24-bit audio, surround sound, audio input, multiple devices being active concurrently, etc. – aren’t on my agenda, but I am trying to take their requirements into consideration in order to avoid coming up with any short-sighted APIs.

The last time multiple audio devices were discussed, nobody seemed to object to my suggestion that SoundDMA should interface with the HAL devices, and that a higher level (most likely SharedSound) should manage directing different applications to different devices as desired. So with that in mind, here’s what I’ve managed to come up with so far.

SoundDMA

There are three core design choices I’m proposing:
Taking the above into account, we can look at SoundDMA’s interfaces and work out which need modification:
That covers the current interfaces, but there will need to be some new ones:
For future versions where we’d want to support multiple concurrently active devices, we’d also have to consider a multiple-device aware SWI for setting the DMA buffer size, along with the new linearhandler-like SWI (and any associated stuff like selecting the sample format, and registering the buffer fill routine). But those changes should be independent of the changes required to get basic device enumeration and switching working (nobody’s going to need to configure the buffer size of an inactive device, because there’s no way they’ll be able to send audio to it without first making it the default device).

SoundControl

SoundControl manages the volume/mixing settings of any hardware mixer. Ostensibly, it already supports multiple devices (or multiple ‘systems’, in SoundControl terminology). But the implementation is buggy (current versions don’t set the device number when new devices are detected at runtime), and the API doesn’t really do anything to deal with the possibility of devices appearing in different orders (and thus being assigned different device numbers at the API level) across reboots. Luckily the docs mention ‘0’ as being the only valid device/system number, and all current machines only have either 0 or 1 mixer devices, so we’re free to change things up a bit in order to support multiple devices properly. I’m of the opinion that device names are better than numbers, so I’m suggesting that we make the following changes:
Potentially we could keep support for using numeric values to refer to SoundControl’s internal device list – e.g. the SWIs could assume that anything under 256 is a number. But I don’t think there’ll be much need for that, as any new version of the sound setup plugin will be writing out configuration values using the device name rather than its number (and current versions of the plugin only deal with mixer 0). Since there’s a 1-1 relationship between audio mixers and controllers, if we stick to using SoundDMA-based device ID names then there’s no need to add an EnumerateMixers SWI to SoundControl – people can just use the SoundDMA SWI, and use the returned (string) IDs with SoundControl.

Sound setup plugin and configuration settings

Since I’m currently only interested in one device being active at once, we can probably just go with the simple approach of having a dropdown list at the top of the window to select which device is the default one. The mixer settings below will then only show the settings for that device. However the plugin will probably remember and save the mixer settings for all devices, so that if you switch between devices on a regular basis you won’t have to set everything up from scratch each time. It would also make sense to use the one configuration file it saves to store the commands necessary to switch to and enable the default device.

Note that I don’t consider sound settings to be important enough to bother with using CMOS for – so during ROM init and the first steps of the boot sequence the system will be using whatever the first device is that it finds, and then later on it should switch to the configured device. This will mean that the startup beep will go to the ‘wrong’ device – if people don’t like that then I could change it so that a second beep is issued when the config file is executed and the device is changed over?

HAL device changes

The most sensible way I can think of for generating the device ID strings would be to make it the responsibility of the audio controller HAL device. So there’ll probably have to be a new revision of the audio controller device to allow for that. Initially I suspect I’ll have a backwards compatibility mode built into SoundDMA to allow it to cope with the old devices (e.g. disable the new SWIs/APIs if it detects an old device as the first device in the system). But then once the code is checked in it shouldn’t take long for me to go through all the HALs and update them to use the new device version, allowing the compatibility mode to be removed from SoundDMA (and as a result, enforcing a requirement of the latest device version).

IOMD & Tungsten

These platforms have yet to be migrated to using the ‘Sound0HAL’ version of SoundDMA. I’m not planning on updating their versions of SoundDMA to support the new features, but that should be OK, as the sound setup plugin will need to stay backwards-compatible with older OS versions, and SoundControl can easily be made backwards-compatible too. Also, some of the changes I’m currently working on for Sound0HAL should make it easier to migrate IOMD & Tungsten to using it.

Does this approach sound OK with everyone? Speak now or forever hold your peace! |
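A minimal sketch of the device-selection behaviour described above – pick the configured device if its persistent ID is present, otherwise fall back to the first device found. The structure, the enumerate_audio_devices() helper and the example ID strings are illustrative assumptions only; the real list would come from the (yet to be defined) SoundDMA enumeration SWI.

/* Sketch of the selection step the boot sequence might perform once
 * SoundDMA identifies devices by persistent ID strings.  The structure
 * and the enumerate_audio_devices() helper are stand-ins for the
 * proposed SWI, and the ID strings are made up for illustration. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *id;       /* persistent ID generated by the HAL device */
    const char *name;     /* human-readable description                */
} audio_device;

/* Stand-in for the proposed enumeration SWI */
static int enumerate_audio_devices(const audio_device **list)
{
    static const audio_device devices[] = {
        { "VC4HDMI0",  "HDMI audio"   },
        { "USBAudio1", "ACME USB Mic" },
    };
    *list = devices;
    return sizeof(devices) / sizeof(devices[0]);
}

/* Pick the configured device if present, otherwise fall back to the
 * first device – i.e. the early-boot vs. config-file behaviour above. */
static int select_default_device(const char *configured_id)
{
    const audio_device *devs;
    int count = enumerate_audio_devices(&devs);

    for (int i = 0; i < count; i++) {
        if (configured_id && strcmp(devs[i].id, configured_id) == 0)
            return i;
    }
    return count ? 0 : -1;   /* no match (or no devices): use the first */
}

int main(void)
{
    printf("default device index: %d\n", select_default_device("USBAudio1"));
    return 0;
}

Because matching is done on the persistent ID string rather than on an index, the result does not depend on the order in which devices happen to be enumerated after a reboot.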
Dave Higton (1515) 3534 posts |
There’s one tiny detail that I can see: it’s possible to have two identical USB audio devices. If all we do is interrogate them for their names, they would be identical and therefore indistinguishable. Unlikely, I know, but there are many cheap devices these days. Think of someone doing multitrack recording on a shoestring budget. The best I can think of is to append the USB identifier number, probably in brackets, to the name, e.g.:

LuckyChinee Audio interface (USB12)

I’m happy to update the USBAudio module as required. I don’t know whether it’s found its way into RO distributions yet – I suspect not.

Sadly, I think sample sizes and sample transfer sizes have to be taken into consideration from early on. There are devices out there claiming to support 24-bit audio, some in 24-bit transfers and some in 32-bit transfers. It’s not always multiples of bytes either – I have a device that carries 20-bit samples in 24-bit transfers. (At least, 20 bits is less dishonest than 24 bits!) |
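A small sketch of the suggested disambiguation: only append the bracketed USB identifier when two devices report the same product string. The usb_dev structure and the names used are made up for illustration.

/* Append "(USBn)" to the display name only when another device reports
 * the same product string – otherwise just use the name as-is. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *product;   /* name read from the USB descriptors */
    int         usb_num;   /* DeviceFS-style USBn number         */
} usb_dev;

static void display_name(const usb_dev *all, int count, int which,
                         char *out, size_t outsz)
{
    int duplicate = 0;
    for (int i = 0; i < count; i++) {
        if (i != which && strcmp(all[i].product, all[which].product) == 0)
            duplicate = 1;
    }
    if (duplicate)
        snprintf(out, outsz, "%s (USB%d)",
                 all[which].product, all[which].usb_num);
    else
        snprintf(out, outsz, "%s", all[which].product);
}

int main(void)
{
    usb_dev devs[] = { { "LuckyChinee Audio interface", 12 },
                       { "LuckyChinee Audio interface", 14 } };
    char buf[64];
    for (int i = 0; i < 2; i++) {
        display_name(devs, 2, i, buf, sizeof buf);
        printf("%s\n", buf);
    }
    return 0;
}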
Colin (478) 2433 posts |
I think it should be implemented using DeviceFS. The devices are identified in the order that they are registered. Fixed devices will always have the same number after you have switched the computer on, because they are always there. So a computer with 2 built-in audio devices would register them as audio1, audio2, etc., just as you get Serial1 and Serial2 for serial devices. Removable devices are given numbers based on an enumeration of the devices:$ directory, so the first removable device will be audio3. If a module is written for a different device it can work out its number from reading the directory. There isn’t a way to uniquely identify a device – they don’t have serial numbers.

The stream can be configured using the special fields. The driver can ask each device to enumerate its features (the device input) and use this information together with the special field (which is the caller input) to work out what filters and mixers are required to play the stream. Features that can be read would include human-readable device descriptions. So you tell it what you want with the filename, e.g. ‘audio1#samplerate44100;samplesize8;nomix;noconvert:’ would not open if the device was already in use (because the nomix option is set) and would do the minimum conversions necessary to play. It may also be possible to play something using redirection. |
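To illustrate the proposal, this is roughly what playing a raw block of samples through such a stream might look like from C. The device name and special fields are taken from the example above and don’t exist in any current build; OS_Find and OS_GBPB are simply used in their usual open/write/close roles.

/* Sketch only: the 'audio1' DeviceFS device and its special fields are
 * part of the proposal above, not an existing interface. */
#include <stdint.h>
#include "kernel.h"
#include "swis.h"

static _kernel_oserror *play_block(const int8_t *samples, unsigned length)
{
    _kernel_oserror *e;
    int handle;

    /* Open the (hypothetical) audio stream for output */
    e = _swix(OS_Find, _INR(0,1)|_OUT(0),
              0x8F, "audio1#samplerate44100;samplesize8;nomix;noconvert:",
              &handle);
    if (e) return e;

    /* Push the sample data into the stream's DeviceFS buffer */
    e = _swix(OS_GBPB, _INR(0,3), 2, handle, samples, length);

    /* Close the stream (which would flush and stop the device) */
    _swix(OS_Find, _INR(0,1), 0, handle);
    return e;
}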
Sprow (202) 1158 posts |
My only comment/observation is to flag up that the strings the HAL provides are not internationalised, and not intended to end up in user dialogues. Obviously (like *Configure commands) it’s OK to write them to an obey file etc., but you’ll need some extensible way to look up the device name for the configure plugin. |
Jeffrey Lee (213) 6048 posts |
Indeed!
Yes, that’s probably the best way of dealing with it. Should be easy enough to build into the USB audio driver.
I don’t think I agree. The way I see it, using a callback-based approach for buffer filling has a number of advantages over a file-based approach:
With files your code would have to rely on a lot of guesswork to try to ensure the buffer is filled at the correct rate, and that there’s always enough data available to avoid underflows. Checking for underflows might not even be as simple as checking to see if the file is empty – you might find that the driver requires a minimum amount of data to be in the file for it to be able to pull the data out and process it. I’m happy with people building a DeviceFS layer on top of SoundDMA/SharedSound, but I don’t think it’s the right way to go for the lowest level of the interface.

The most sensible way I can think of for generating the device ID strings would be to make it the responsibility of the audio controller HAL device.

Good point. What if we made it so that SoundDMA would generate a messages token from the two-byte HAL device ID and look it up in its messages file? If the lookup fails it will then fall back to using the (English) name provided by the HAL device. For HAL devices which are implemented in modules, we can require the module to provide localised names (as part of the device struct), but for devices implemented in the HAL we can rely on SoundDMA’s messages file to override the default text as required. The scheme could easily be extended to other HAL devices (e.g. have a shared messages file somewhere in ResourceFS, and a SWI in the kernel for getting the localised name of a device). |
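A sketch of that token-plus-fallback lookup, assuming a messages file has already been opened with MessageTrans_OpenFile. The token format (‘SndDevXXXX’) is an arbitrary choice for illustration; the ‘token:default’ form lets MessageTrans return the HAL’s English name when no translation exists in the file.

/* Build a token from the 16-bit HAL device ID and let MessageTrans fall
 * back to the English name supplied by the HAL device if the token is
 * absent from the messages file. */
#include <stdio.h>
#include "kernel.h"
#include "swis.h"

static _kernel_oserror *device_display_name(int fd[4], /* open MessageTrans descriptor */
                                            unsigned hal_device_id,
                                            const char *hal_english_name,
                                            char *buf, int bufsz)
{
    char token[300];
    int len;
    _kernel_oserror *e;

    /* "token:default" – MessageTrans returns the text after the colon
       when the token itself is not found in the file. */
    snprintf(token, sizeof token, "SndDev%04X:%s",
             hal_device_id & 0xFFFF, hal_english_name);

    e = _swix(MessageTrans_Lookup, _INR(0,7)|_OUT(3),
              fd, token, buf, bufsz, 0, 0, 0, 0, &len);
    if (!e && len < bufsz)
        buf[len] = '\0';   /* ensure the copied result is terminated */
    return e;
}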
Colin (478) 2433 posts |
The best I can think of is to append the USB identifier number, probably in brackets, to the name

What is the point? The USB driver number only identifies the device while it is plugged in; it has no more value in identifying the device than any other number. It seems to me that you are repeating the same mistakes as the current system, where the normal user is virtually excluded from using the sound system in application space and has to use third-party apps to ring a bell. I think all of your objections can be addressed in DeviceFS, and it produces a unified interface for all streaming devices. Even if it doesn’t, the solution is to change DeviceFS, not ignore it and go your own way. I understand it’s easier to knock up a few SWIs to get things working, but to expose the back ends to the user is a poor design choice in my opinion.
The lowest level should be in the backends. The user shouldn’t see them. A tacked-on DeviceFS would be just that: tacked on. You would then have all devices with two identifiers – one for DeviceFS and one for your system. If you decide not to use DeviceFS now then there is no point in implementing it – only one of the systems will be maintained as new devices appear. |
Jon Abbott (1421) 2651 posts |
The use of the sound system in Application space was deprecated when RISC OS 3 came out; if you read the PRM, it clearly states sound should be done in Modules. I have to agree with Jeffrey: DeviceFS should play no part in SoundDMA. There’s nothing to stop someone writing a DeviceFS shim over SoundDMA, although the sound data would still need to come from a dynamic area so it doesn’t break when tasks are switched. |
Sprow (202) 1158 posts |
the strings the HAL provides are not internationalised, and not intended to end up in user dialogues

Something like that sounds fine. I think the nearest thing I’ve spotted on my travels is the WaveSynth module, which allows alternative wave tables via multiple instantiation, and tries a similar scheme of looking up a token then giving up and using English if no token is found. SoundControl or SoundDMA could host a few of the most common names, a bit like Territory Manager has a list of known territory names, even though typically only one is ever present. |
Rick Murray (539) 13851 posts |
I have no comment on how sound is implemented. So long as DigitalCD works, I don’t really care. ;-)

I would, however, like to pass on a small observation – why is sound in DeviceFS such a bad thing, when DeviceFS (for better or worse) is pretty much how we talk to the entire USB system? Is sound not a “device”? Or is there some sort of…? Just asking… |
Jon Abbott (1421) 2651 posts |
That’s part of the problem: it wouldn’t cover non-USB devices. Sound is also IRQ-based, uses DMA, and is very dependent on timing to avoid underruns. |
Colin (478) 2433 posts |
As is USB audio. |
Jeffrey Lee (213) 6048 posts |
The best I can think of is to append the USB identifier number, probably in brackets, to the name

But if it remains plugged in, and the user doesn’t reboot (or do anything else which might cause the device numbers to change), it can be a useful aid in helping the user tell the difference between two identically-named devices.
I’m not ignoring DeviceFS and going my own way – I’m simply sticking with the established method, which is to have the sound drivers call the audio generators directly. Remember that I wanted to use DeviceFS for audio input but you shot me down! But you’re right, if there are problems with DeviceFS (which there certainly are!) then we should probably look into fixing them. I think the main problems I have with using DeviceFS for audio output are:
If I’ve read your proposal correctly, then you’re suggesting that it’s the responsibility of the individual driver modules to create and maintain their own DeviceFS entries – i.e. there’s no centralised control. So under such a system:
|
André Timmermans (100) 655 posts |
I would also add that all file access under RISC OS blocks the system till the access is completed. This means that by pushing to a DeviceFS file you block everything while waiting for the device to consume/buffer the whole chunk of data you passed to it. |
Colin (478) 2433 posts |
The identifier isn’t the place to describe the backend, just as the USB identifier doesn’t tell you what the device is. For human identification of a USB device you have *usbdevices to map the computer-readable ID (USB8 etc.) to a human-readable identification. All that you need for an identifier is Audio1, Audio2, etc. Or better still, Speaker and Microphone to differentiate in and out. Each backend can then supply a ‘serial number’ if you need further identification. A USB device could use the address path to the port + vendor/product/version as a ‘serial number’ to identify it. Ignore the port address part if the vendor/product/version isn’t duplicated in the registered devices. How does Audio1#USB8 help you identify the device? You would have to do *usbdevices when you should be doing *audiodevices. The audio module can interrogate the backends for their details for you.
I never thought that I came across as that forceful :-) I think I understand the problem it is trying to solve better now, so I have changed my mind.
I thought that. I’m not sure, but I think Buffer_LinkDevice could be used, which is basically a callback when data enters the buffer and isn’t broadcast. If it’s not quite right, the buffer manager could be modified to suit.
We are talking about a frontend for all audio devices. A frontend based on a motherboard interface isn’t going to be very flexible. As I see it, a DeviceFS layer would enable multiple users to open a device simultaneously; the multiple buffers would feed the mixer, which produces the data for the backend. If people want to use the sound system from the foreground, so what – they would use a big buffer and have to put up with underflow, which appears as a click. The backends require the data to be available all of the time so that they can feed the device at the required rate. If data doesn’t appear (the buffer is empty) the backend just plays zeros. If the buffer fill routines are run from a ticker event you would use a shorter buffer – not that it makes much difference whether you fill little and often or a lot and infrequently; the rate of emptying is the same.
No, there is one Audio DeviceFS module; it manages the different audio devices in much the same way as the USB module manages USB devices and – although it isn’t a DeviceFS – EtherUSB manages USB Ethernet devices. It’s the responsibility of the backend driver to be the ultimate consumer: you have DeviceFS → software processing (mixer, resampler, etc.) → backend; once open, the backend just ticks away, consuming data whether it is there or not.
I’m afraid I’m not too familiar with them, but – given that I don’t know any details – SoundDMA doesn’t sound too useful for a USB backend, so I would have thought that it would be a service for a motherboard (possibly PCI) backend. I guess SoundControl is the bit that does the software processing of the stream, so that would be the bit between the DeviceFS and the backend.
As I see it, DeviceFS will do the routing; the name ‘Audio1’ is basically a handle to the backend.

If DeviceFS devices are expected to cope with mixing and format conversion, will there be a common implementation of the mixing code which devices can share (if so, where), or will each device be expected to implement the code itself?

Each device (backend) will have a set of capabilities. Each open stream to a device has a set of features. The bit between the DeviceFS buffers and the backend has to merge the buffers, converting sample rates and sample sizes as necessary. It finds out what is necessary by being told by the opening device what it is giving you, and by then asking the backend to match it – or, if the backend is already open, by converting the new stream to the same format as the open stream. You could possibly have plugin modules between DeviceFS and the backends to handle MP3. Anyway, I’ve written enough – it may give you a few ideas (that’s if it makes any sense – it looks rather long). |
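A minimal sketch of the ‘merge the buffers’ step described above, ignoring sample-rate and sample-size conversion: every open stream is summed into the backend’s block, silence is substituted for any stream whose buffer is empty, and the sum is saturated rather than wrapped.

/* Everything is assumed to already be 16-bit at the backend's rate. */
#include <stdint.h>
#include <stddef.h>

static int16_t saturate(int32_t s)
{
    if (s >  32767) return  32767;
    if (s < -32768) return -32768;
    return (int16_t)s;
}

/* streams[i] may be NULL when that stream's DeviceFS buffer is empty –
 * the backend then just gets zeros for it, as described above. */
static void mix_block(const int16_t *const *streams, int nstreams,
                      int16_t *out, size_t nsamples)
{
    for (size_t n = 0; n < nsamples; n++) {
        int32_t acc = 0;
        for (int i = 0; i < nstreams; i++)
            if (streams[i]) acc += streams[i][n];
        out[n] = saturate(acc);
    }
}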
Tony Noble (1579) 62 posts |
A nice thing that OS X in particular does is proper abstraction of audio devices. So while you may only have one ‘device’ active at a time, it’s entirely possible to create aggregate devices from multiple pieces of hardware – inputs from device A, outputs from device B, MIDI from C, etc. It’s a very useful feature for even semi-serious audio work. Also, it’s probably worth not thinking in terms of ‘microphone’ and ‘speakers’. A good portion of audio devices these days have multiple inputs and multiple outputs, of which any number may actually be used. |
Colin (478) 2433 posts |
If a physical device has multiple inputs and outputs I would expect each input and output to act as a separate device, so a single physical device may have multiple logical devices. Can you think of an instance when you would want to send to an input? Microphone and speakers are just a way to remind you, when you write it down, that it is an input or an output. I’m not saying you would use redirection, but would
be more meaningful than
maybe recorder and player would be better
|
Tony Noble (1579) 62 posts |
audioIn and audioOut? |
Jeffrey Lee (213) 6048 posts |
But if it remains plugged in, and the user doesn’t reboot (or do anything else which might cause the device numbers to change), it can be a useful aid in helping the user tell the difference between two identically-named devices.

I think there’s been a bit of a misunderstanding here. AIUI Dave’s suggestion was for providing a way to differentiate devices when the device name is displayed to the user. E.g. in a piece of recording software it might list the following input devices:

System Line-in
ACME USB Mic

If you plug in another USB device it may turn out to share the same chipset & USB descriptors as the first device, so you’ll end up with two “ACME USB Mic” entries in the device list. So adding the USB device number to the description would be a way for the user to tell the difference between them, so that they can keep track of which device is being used for each track of the recording. If we’re implementing everything via DeviceFS then we could just as easily display the DeviceFS entry name at the end of the description instead.

There aren’t any callbacks made available to the client for when buffers fill/empty. Instead the client has to rely on the buffer filling/emptying upcalls – which are broadcast messages (UpCallV), and so won’t scale very well as the number of buffers increases.

Easily fixable by extending the buffer manager. The current Buffer_LinkDevice implementation only allows for one client at any time – and DeviceFS already uses that SWI in order to allow it to trigger the callbacks into the device driver. Unfortunately Buffer_UnlinkDevice doesn’t specify which client to unlink (all you do is specify the buffer handle), so some thought will be needed on the best way of extending the API (perhaps just add a new pair of link/unlink SWIs, and modify the old SWIs to call the new ones with suitable parameters so that only the client that registered with LinkDevice will be removed when UnlinkDevice is called).

Where do you see SoundDMA and SoundControl fitting in?

Ah, that probably doesn’t help :-) Briefly:
|
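One possible shape for the buffer manager extension mentioned above: a link call that returns a registration handle, so the matching unlink can identify exactly which client to remove, and a notification step that walks the registered clients instead of broadcasting on UpCallV. The names and structures here are illustrative only – this is not the existing Buffer_LinkDevice API.

/* Sketch of a multi-client buffer registration scheme. */
#include <stdlib.h>

typedef void (*buffer_handler)(int buffer, void *workspace);

typedef struct client {
    struct client *next;
    buffer_handler handler;
    void          *workspace;
} client;

static client *clients;   /* one list per buffer in a real implementation */

/* Hypothetical 'link' call: returns a handle identifying the registration */
static client *link_client(buffer_handler h, void *ws)
{
    client *c = malloc(sizeof *c);
    if (!c) return NULL;
    c->handler   = h;
    c->workspace = ws;
    c->next      = clients;
    clients      = c;
    return c;
}

/* Hypothetical 'unlink' call: removes just the named registration */
static void unlink_client(client *which)
{
    for (client **p = &clients; *p; p = &(*p)->next) {
        if (*p == which) {
            *p = which->next;
            free(which);
            return;
        }
    }
}

/* When data enters or leaves the buffer, call every registered client
 * rather than broadcasting on UpCallV. */
static void notify_clients(int buffer)
{
    for (client *c = clients; c; c = c->next)
        c->handler(buffer, c->workspace);
}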
Rick Murray (539) 13851 posts |
Wouldn’t it be reasonable for an audio “device” to provide some sort of meaningful name and maybe take some parameters?

{ > lineinjack#44khz;stereo: }
{ < mp3encoder#$.song01;160kbps;vbr;stereo: }

That would be, for example, recording from the line-in jack (i.e. Beagle xM) as 44kHz stereo (which should be the default, but is given here by way of example), and pushing to an MP3 encoder which is encoding stereo VBR at 160kbps average, saving it as $.song01 on the currently selected filesystem. |
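A sketch of how a driver might parse that kind of special field into stream settings. The recognised keywords are just the ones from the examples above, and the naive ‘khz × 1000’ conversion is purely illustrative.

/* Split a special field such as "44khz;stereo" into settings. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    int sample_rate;   /* Hz */
    int stereo;        /* 0 = mono, 1 = stereo */
} stream_params;

static void parse_special(const char *special, stream_params *p)
{
    char copy[256];

    /* Sensible defaults when a keyword is absent */
    p->sample_rate = 44100;
    p->stereo      = 1;

    strncpy(copy, special, sizeof copy - 1);
    copy[sizeof copy - 1] = '\0';

    for (char *tok = strtok(copy, ";"); tok; tok = strtok(NULL, ";")) {
        size_t len = strlen(tok);
        if (len > 3 && strcmp(tok + len - 3, "khz") == 0)
            p->sample_rate = atoi(tok) * 1000;   /* naive: "44khz" -> 44000 */
        else if (strcmp(tok, "stereo") == 0)
            p->stereo = 1;
        else if (strcmp(tok, "mono") == 0)
            p->stereo = 0;
        /* unknown keywords would be rejected or passed on to the backend */
    }
}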
Colin (478) 2433 posts |
You have 3 USB devices plugged in, all identical – USB1, USB2, USB3 – and they create 3 audio devices, Audio1, Audio2, Audio3. How does knowing that Audio2 is USB3 as opposed to USB1 better inform me? It tells me that they are USB, yes, but not which USB device is which – I must be missing something. There are only 3 ways to work out which device is which:
You would have to reinvent the ‘entryname’ in some form or other anyway if you don’t use DeviceFS. One possibility with DeviceFS is to extend the window you get with Filer_OpenDir devices:$ – registering a program with the filetype could show information about the device; different pictures for different devices would be nice.
That’s all that is necessary. Just modify DeviceFS to stop hogging it, and use a function for LinkDevice which implements a vector; you then just register your handler with DeviceFS. You have to remember too that DeviceFS is not just data transfer via buffers – USB control transfers are done via DeviceFS_DeviceCall, for example.

Thanks for the overview of the sound system. The only thing I’ll say about that at the moment is that you can’t assume that you’ll have an audio device present to provide the tick. The motherboard chip may fail because you’ve put a screwdriver through it by accident. |
Colin (478) 2433 posts |
Rick: As you probably know, USB audio devices carry information regarding what type of device they are, so what you suggest is probably doable – assuming the author of the device software could be bothered to choose an accurate type describing the device. However, it does make enumerating them harder; at present you can just use a wildcarded directory read. You can put anything in special fields – whether they will do anything is another matter, but I would expect format information to be highly likely. If nothing else, redirection or copying would be handy for fault finding, just to see if you can get a noise out of the speakers.

Ah, Jeffrey, there’s another way to determine which device is which: send something to each in turn. |
Rick Murray (539) 13851 posts |
That’s the problem I’m having with MIDI. I can present multiple MIDI devices, but how do you tell which is which? I’ll need to find some code to reliably read the vendor ID. That will let me tell my Yamaha keyboard from the MIDI dongle, but it won’t tell the MIDI dongles apart from each other… It’s a shame USB didn’t specify some sort of UID in the design (though I’d rather expect cheap Chinese rubbish to provide the same hardwired “UID” in each device).
Given the way modern motherboards are designed, I’d expect damage to the audio chip to affect a host of random other things due to multilayer board routing. Oh look, your parallel port no longer ACKs and your IDE bus can only talk to the master drive… |
Dave Higton (1515) 3534 posts |
They did.
They do. |
Dave Higton (1515) 3534 posts |
You’re not missing anything. But if we don’t differentiate them in some way, and address them by name (thus with identical names), there seems to be no way (a) to guarantee that all commands/data will go to the same device (they will only do so if we pick the first match from a consistent list), and (b) to use any other than the first match. Given two or more identical USB devices, what does DeviceFS give us to differentiate between them that is better than a suffix of e.g. “(USB3)”? If the devices really do have unique serial numbers, we can get them and use them. But don’t count on it. Cheap Chinese really is cheap Chinese. |
Colin (478) 2433 posts |
I think I’m not explaining myself in this regard – or maybe I am and you don’t agree, which is fair enough. USB3, USB4, etc. are essentially random numbers allocated to a USB device when it is attached. The USB part of the name identifies it as a USB device. That’s fine: when you have a USB handle you can use it to access a USB device. But a user of the audio system will not be accessing the device with a USB handle; the USB handle will be used by the audio backend, and the fact that it is USB should be invisible to software using the system.

The Audio DeviceFS module will initialise backends, and when a backend indicates that it has a device attached the frontend (DeviceFS) will give it a unique number. So you get Audio1, Audio2, Audio3, Audio4 for a motherboard device, a PCI device and 2 USB devices. Each device gets a unique identifier even if the devices are identical. Audio software using the system works exactly the same whichever backend is used; only the user needs to recognise the device, and for them it may help to know its origin.

Audio software can then enumerate audio devices by doing an OS_GBPB with an ‘Audio*’ wildcard. It can then ask each device for information that a human can use to identify the device. So in the program’s menu you have:

Audio1 – Motherboard – description read from device (or made up if necessary)

where the bit after the first ‘–’ is a text string read from the backend, so that it can add what it likes to the description it reads off the device and present it in a standard format. If Audio3 and Audio4 are identical, knowing that they are USB2 and USB15 doesn’t help. |
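A sketch of that enumeration step: read the names matching ‘Audio*’ out of devices:$ with OS_GBPB 9, then ask each backend for its human-readable description. OS_GBPB 9 is the standard directory-enumeration call; the Audio* devices themselves are, of course, part of the proposal rather than anything that exists today.

/* Enumerate proposed Audio* devices from the DeviceFS directory. */
#include <stdio.h>
#include <string.h>
#include "kernel.h"
#include "swis.h"

static void list_audio_devices(void)
{
    char buffer[256];
    int  read, next = 0;

    do {
        _kernel_oserror *e = _swix(OS_GBPB, _INR(0,6)|_OUTR(3,4),
                                   9,             /* read names from a directory */
                                   "devices:$",   /* DeviceFS root               */
                                   buffer,
                                   255,           /* read as many as will fit    */
                                   next,          /* continuation offset         */
                                   sizeof buffer,
                                   "Audio*",      /* wildcard match              */
                                   &read, &next);
        if (e) break;

        /* Names are returned as consecutive zero-terminated strings */
        const char *p = buffer;
        for (int i = 0; i < read; i++) {
            printf("%s\n", p);   /* here the backend would be asked to describe itself */
            p += strlen(p) + 1;
        }
    } while (next != -1);
}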