How to do USB audio
jim lesurf (2082) 1438 posts |
I’d assume that using the ‘handle%’ approach would allow for more than one device in use at a time. Also, the ‘system device’ could come to be viewed as akin to the ALSA ‘default’, i.e. the device used unless the program specifies something else. And at some point we face the question of how USB Audio device use is going to be integrated with the existing use of mainboard hardware… which at present assumes a ROSS model where all output is ‘mixed’ by SharedSound to play out via just one device! (Not something I personally regard as ideal, but I can see it makes sense for many users.) TBH, as it stands in tests, I can see using the USB Audio device for playing decent music and leaving the existing internal hardware via SharedSound for any beeps or clangs as being fairly useful – the snag being the absence of integration between the existing internal arrangements and the new USB. A problem if you want to use !DigitalCD or !PlaySound via USB… Jim |
Dave Higton (1515) 3526 posts |
Working towards a public beta… Colin, are you going to make your USB module(s) available to the public to try? I can make my USBAudio module available, but it’s worthless without your stuff. Also, a few days ago, I tried unplugging a USB audio device while it was playing. The Iyonix immediately stiffed completely – no mouse or keyboard. It would be nice to have a solution for that. |
Dave Higton (1515) 3526 posts |
What I’m trying to achieve at the moment is a decent level of abstraction, to make USB audio devices useful to application (or system) programmers without their having to learn anything about USB or USB audio. That’s a good level to reach. That level also makes it easy to integrate USB audio into the computer, either alongside or in place of the existing internal arrangements. |
Dave Higton (1515) 3526 posts |
If we integrate USB audio into the computer as the standard sound system, we’re going to have to be careful of something (that may well turn into the plural!) – while you can change the sampling rate of an audio chip on the fly, you can’t do it to an open USB stream – you have to close it and re-open it with the new sampling rate. |
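A minimal sketch of that close-and-reopen pattern, in C. The function names here are hypothetical stand-ins, not the USBAudio module’s actual calls; only the pattern itself is the point:

#include <stdio.h>

/* Hypothetical stand-ins for the module's real open/close calls:
   neither name is from the actual USBAudio API. */
static int audio_open(int samplerate, int samplesize)
{
    printf("open: %d Hz, %d bytes per frame\n", samplerate, samplesize);
    return 1; /* dummy handle */
}

static void audio_close(int handle)
{
    printf("close: handle %d\n", handle);
}

/* An open USB stream cannot change rate on the fly, so a rate change
   is always close-then-reopen, accepting the short gap between files. */
static int audio_set_rate(int handle, int newrate, int samplesize)
{
    audio_close(handle);
    return audio_open(newrate, samplesize);
}

int main(void)
{
    int h = audio_open(44100, 4);    /* 16-bit stereo */
    h = audio_set_rate(h, 96000, 6); /* next file: 24-bit stereo at 96k */
    audio_close(h);
    return 0;
}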
Colin (478) 2433 posts |
Jess. Unless I’ve missed something, I don’t think it does. Say you have 7 output devices: you select 1 for the system, and everything can use that, i.e. programs use the system device by default. However, just as you chose the device for the system to use, you can configure individual programs to use a different device, bypassing the system altogether. System signal processing can be made available separately and can be used by an app if it wants it. Programs see the system device as a device the same as any other, but it isn’t: it’s a proxy for a device. When a program uses the system device, the data is routed through a resampler (if required), a mixer and finally the device. This makes the system device the one used by people who don’t want to get their hands dirty. The list of audio devices includes the system device for you to choose from in programs. As I think about it, unless the app is device specific, it is not up to the app to choose the device it’s going to play on for you. The API for the programmer would be the same whichever device was used, so music programs would allow you to choose the output device – which wouldn’t be ‘system’ if you didn’t want other sounds to go to the HiFi, or any bit mutilation for that matter. I’d be interested in a scenario you (or anyone) thinks it doesn’t cover. |
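A sketch of the proxy idea Colin describes, with illustrative names only (none of this is a real API): asking for ‘system’ resolves to the configured device with the resample/mix path switched in, while asking for a real device by name bypasses it:

#include <stdio.h>
#include <string.h>

/* "system" is a proxy: it resolves to whichever real device the user
   configured, and the data goes via the resampler and mixer.  A real
   device name goes direct, with the samples untouched. */
struct device { const char *name; };

static struct device devices[] = { { "USB1" }, { "Motherboard" } };
static struct device *system_target = &devices[1]; /* set from Configure */

static struct device *resolve(const char *wanted, int *use_mixer)
{
    size_t i;
    if (strcmp(wanted, "system") == 0) {
        *use_mixer = 1;          /* resampler + mixer, then the device */
        return system_target;
    }
    *use_mixer = 0;              /* direct: bypass the system path */
    for (i = 0; i < sizeof devices / sizeof devices[0]; i++)
        if (strcmp(devices[i].name, wanted) == 0)
            return &devices[i];
    return NULL;                 /* unknown device name */
}

int main(void)
{
    int mix;
    struct device *d = resolve("system", &mix);
    printf("%s via %s\n", d->name, mix ? "mixer" : "direct path");
    return 0;
}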
Colin (478) 2433 posts |
Dave: that is in the fmt% (format) block of Audio_Open.
First of all, I don’t think you should release USBAudio generally until you/we nail down the API – no-one wants to write a program against a changing API. Anyone is welcome to download the USB module I distribute, and I wish more people would (it’s a bit disappointing they haven’t – but it’s pointless unless you’re using an Iyonix). You can distribute it with any distribution you want to make if you like. The more people testing the better. I’ll post the latest version later. It does have an API change: it now uses samplerate and samplesize. For audio that’s samplerate = frequency and samplesize = number_of_channels * byte_size_of_channel, i.e. 4 for 16-bit stereo PCM. |
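As a concrete instance of that formula:

#include <stdio.h>

int main(void)
{
    /* samplesize = number_of_channels * byte_size_of_channel */
    int channels    = 2;                            /* stereo */
    int sample_bits = 16;
    int samplesize  = channels * (sample_bits / 8); /* = 4 for 16-bit stereo */
    int samplerate  = 44100;                        /* = frequency, in Hz */

    printf("rate %d Hz, samplesize %d bytes\n", samplerate, samplesize);
    /* 24-bit stereo would give samplesize = 2 * 3 = 6 */
    return 0;
}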
Colin (478) 2433 posts |
Not really a problem: don’t allow frequency changing on open devices. |
jim lesurf (2082) 1438 posts |
That makes sense to me. In practice if someone is playing a sequence of different files that have different rates we can expect them to be distinct items. So a short pause to change rate should be fine. This is different to playing a sequence of files that have the same rate. There, having no gap or pause (or click!) is desirable as the music may be intended to be continuous. Jim |
Dave Higton (1515) 3526 posts |
I’d rather that interested people do have a go, and feed back how the API could be improved. I do not profess to be clever enough to know the best approach. Clearly you and I both have some interesting ideas; I hope more will turn up. I think I ought to describe it as a public alpha rather than beta. |
Jess Hampshire (158) 865 posts |
With most computer sound systems I’ve seen, you can choose between different devices from the program. I’m suggesting an extra layer in between, so that the programs don’t need to do this, and the choice is all done in Configure (any change in hardware is then also irrelevant to them). |
jim lesurf (2082) 1438 posts |
I think that in principle that would be quite consistent with having a user-choosable ‘default’ device whilst also allowing any specific individual playing application to be set to use something else. e.g. With my Linux boxes I set the USB DAC I’ve plugged in to be the ALSA default, but I can still play out via other routes using a program that I’ve set to use something else for a specific reason. That makes it possible to run two outputs at the same time from two different sources/programs if required. With ALSA you can set these things using either a hand-edited file or some of the control software. The snag, of course, is that too much of this can easily bewilder the user – as indeed happens with Linux and other OSs! :-)

All the above said, I’m a bit wary at this stage of allowing “the best to be the enemy of the good”. i.e. In the end we do need a flexible system that covers both USB and internal hardware and gives flexible user choice, so that needs to be kept in mind when producing an API, etc. However, at present we are at the stage of only having USB Audio working on one example of hardware – the Iyonix. So it may be wisest to go via some preliminary stages which are themselves functional before reaching that end state. It’s impressive that this works on an Iyonix, and it shows that USB Audio can function via RO. But as a ‘tactical’ matter I suspect we need to get something similar working on other hardware, so that developers of modules like PlayIt / DiscSample and apps like PlaySound / DigitalCD will want to take it into account and use it. Having been waving a flag for USB audio for some time, I tend to feel that working examples/demos help people see the point – plus giving them more of a feeling that something is worth their effort/interest.

I’d agree, though, that the end-ideal may be a USB system that becomes a part of the existing ROSS with no particular distinction so far as modules akin to PlayIt and apps like !PlaySound are concerned. But that presumably means many more ducks in a row than having a player work by using a set of USB modules via SWIs specific to USB, used by a method quite distinct from the existing SoundDMA/SharedSound/etc. The worry is that it may not reach a usable stage for a long time, and it needs input from people at all levels – app, driver, modules, etc. – to build the entire chain. So I think the path may need to go through a couple of stages on the way:

1) Become functional on more hardware like the RPi, PandaBoard, etc.

2) Have a clear API usable by modified/new versions of things like PlayIt / PlaySound at the driver and app levels.

Personally, I suspect a ‘USB direct/specific’ API would be fine anyway, as it could be done without having to immediately solve and implement integration with the pre-existing ROSS arrangements. And again, my Linux experience is that many audio applications give the user a choice of various ‘direct’ or ‘indirect’ routes. So I can quite imagine a playing program with a user menu that includes a choice between USB device(s) and the inbuilt hardware. It seems common enough for programs like Audacity, Audacious, etc. to do this: one choice is a ‘default’, others are more specific. So I could see a ‘USB device #N’ option. However, the main strategic requirement would be to ensure that anything like this along the way didn’t paint itself into a corner and make it impossible to link into an extended system later on. Jim |
Colin (478) 2433 posts |
What I want is the OS or any other program to go nowhere near my precious device unless I tell it to – OK, Jim’s turned me into a digital audio nut :-). Given that as my starting point, I’m happy to let it use a motherboard device automatically, with an option to change it if I want to. The alternative is what? First of all, I would describe the HiFi device, headset device etc. as just proxies for some actual device. I have limited the OS to one proxy, ‘system’; you want to introduce multiple proxies that are audio-function related: ‘system’, ‘hifi’, ‘headset’ etc. This would result in having to configure multiple proxies even if you don’t have such a device, and once you have configured them the actual device may disappear – a problem with any system used. In short, I don’t like moving the configuration away from the program I’m using. Say I have a music player program: it uses the HiFi device as a default, but the HiFi device has been unplugged, so when I run the program nothing happens. So instead of configuring the program for another device, I have to go into Configure and change the HiFi device – which I have to remember to change back to my normal HiFi device when I’ve finished. The program could allow you to change the device it uses without saving – nice for comparing devices. I’m not saying I’ve got this right; it’s open for discussion, and I hope to be convinced otherwise :-) It is possible to have both. |
Dave Higton (1515) 3526 posts |
That’s why I’m concerned to get the layering correct. USBAudio should allow use and control of the devices. It’s up to another layer to do any required integration into the larger system. |
Colin (478) 2433 posts |
Having thought some more, I think we have to design a new sound system for RISC OS – not just USB. The existing sound system caters for a single device, so to allow existing programs to work unaltered that can’t change, i.e. you can’t make existing programs select a different device. So to make the existing system a bit more flexible, you make the device it uses selectable and configurable: you rewrite the back end of the current sound system so that it uses a device selected from a new Audio system. This will ensure legacy support. The new Audio system brings together devices from anywhere, not just USB, under a common API. It can start with just USB, but you’d be able to add motherboard devices, PCI devices – anything else you can think of. You would even include proxies, so a proxy device like ‘system’ gets listed under *audiodevices just like any other device for a program to choose. It’s not all going to happen at once, and may never all happen, but if we make a start with just a USB Audio device module registered to an Audio interface module, then the application programmer only has to use the Audio API, and support for non-USB devices can be added later without affecting the programs. It makes it fairly futureproof, I think. |
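A sketch of what that registration might look like, purely to illustrate the shape – none of these names exists yet:

#include <stddef.h>

/* Each device module (USB now; motherboard, PCI later) registers a
   driver with the Audio interface module.  Applications only ever see
   this common layer, and *audiodevices would just walk the list. */
typedef struct audio_driver {
    const char *name;                       /* e.g. "USB1", "system" */
    int   (*open)(int samplerate, int samplesize);
    int   (*write)(int handle, const void *buf, int len);
    void  (*close)(int handle);
    struct audio_driver *next;
} audio_driver;

static audio_driver *registry;              /* head of the driver list */

void audio_register(audio_driver *drv)      /* called by a device module */
{
    drv->next = registry;
    registry  = drv;
}

void audio_deregister(audio_driver *drv)    /* e.g. on device unplug */
{
    audio_driver **p;
    for (p = &registry; *p; p = &(*p)->next)
        if (*p == drv) { *p = drv->next; return; }
}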
Colin (478) 2433 posts |
OK Dave, the latest version is USBModule.zip. Just to remind you: it’s changed from just samplerate in the specialfield to samplerate and samplesize (e.g. 44100 and 4 for 16-bit 2-channel 44100 audio). You may like to look at IsocPlayer – which now ups the bit resolution from 16-bit to 24-bit if required. One discovery that may interest you: at least one of Jim’s devices returns an error if you try to read the set frequency, so you can’t check the actual frequency selected after you change it.

In case anyone is interested in trying it, I’ve included all the USB modules and DeviceFS, so it will work on any Iyonix regardless of which OS version you are running – famous last words. To install the new USB stack, double-click on !USBModule; a taskwindow will appear listing modules, it will pause while devices are reinitialised, then finish the listing. You are now ready to go. You don’t want to be using USB discs while you run !USBModule, as they will be disabled – everything USB will be.

For anyone wanting to try it, IsocPlayer will play PCM WAV files – it’s just a test program. Getting it to work may require details of your device – I can help with that – otherwise you just have to use PROCplay(filename) at the top of the program, just like I have, to play a PCM WAV file (e.g. one ripped from a CD). Having modified IsocPlayer, double-click on IsocPlayerRun and it all happens in a taskwindow – multitasking isn’t great, but that’s application space for you. As long as the buffer doesn’t empty, it should play without crackles. Given the number of people who have tested it, as Dave said, it can only be classed as alpha. |
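For anyone unsure what “in the specialfield” means in practice, something like the following is the idea. Note that the exact path syntax below is a guess; only the two keys and the example values (44100 and 4) come from Colin’s description, and the device name is hypothetical:

#include <stdio.h>

int main(void)
{
    /* GUESSED DeviceFS path: special fields travel in the filename
       between '#' and ':'.  Keys samplerate/samplesize are from the
       post; the spelling and the "$.USBAudio" name are not confirmed. */
    FILE *f = fopen("devices#samplerate44100;samplesize4:$.USBAudio", "wb");
    if (f == NULL) {
        fprintf(stderr, "open failed\n");
        return 1;
    }
    /* ...write 16-bit stereo PCM frames here... */
    fclose(f);
    return 0;
}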
Dave Higton (1515) 3526 posts |
OK, there’s a small archive at http://davehigton.me.uk/Audio/USBAudio.zip

This consists of a USBAudio module and a UAPlayer application. If you want to try it, you’ll need Colin’s current set of USB modules, as in the previous posting; you’ll need an Iyonix; and you’ll need some kind of USB audio device. I’m not sure how old a version of RISC OS you can use; Colin will have to tell us.

The UAPlayer application is not suitable for a real product; I only wrote it as a test bed for developing the module. To use it, simply drag a WAV file to its writable icon; it will start playing immediately. You can’t stop it except by quitting the app. If you drag the window larger, you can see icons that show the sample rate, interface, alternate and endpoint in use.

If you haven’t gathered already, I’ll spell something out for you all: the USBAudio module API is certain to change, so don’t splatter copies of this version all over your hard drive, because you’ll need to delete them all as new versions become available – and when it finally becomes a standard component of RISC OS, which it will, one day (hopefully soon), free for everyone (as in speech and beer).

Please, try it, try programming with it. Above all, please give us feedback! |
Colin (478) 2433 posts |
My USB modules should work on any version of RISC OS on an Iyonix. Trying a 16-bit 44100 file failed with “USBAudio_OpenOut failed: Not PCM”. Both played with my IsocPlayer – you may like to look at that. The 16-bit may have failed because the device is 24-bit only. |
Dave Higton (1515) 3526 posts |
Interesting. “Not PCM” has to have come from the device – it’s not an error the USBAudio module generates. What is one supposed to do with a device that doesn’t support 16 bit audio? Pack the samples out to 24 bits? |
Colin (478) 2433 posts |
:-) You can try it out with these naim test files. I tried the hi-definition WAV. There is another problem, from programs which ‘clean up’ WAV files – probably to get them to work with programs that make the wrong assumptions: when the format is WAV_FORMAT_EXTENSIBLE (&FFFE – WAV_FORMAT_PCM is 1), they truncate the "fmt " block to 16 bytes to make it look like a PCM "fmt " block, but forget to change WAV_FORMAT_EXTENSIBLE to WAV_FORMAT_PCM. So PCM can be WAV_FORMAT_EXTENSIBLE type with a length of 16.

16-bit to 24-bit conversion is all you can do. The DACs I’ve seen which do 24-bit only do 24-bit. The device I bought claims to be 16/24-bit in the literature but is actually just 24-bit. I just padded 16-bit files with 0. You can use PROCasm from my program if you want – save you the effort. The assembler is called with
You’ll need to ensure the source contains whole samples. I just had the buffer big enough for the converted block and loaded the source at the back end of that block. |
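For reference, a C equivalent of the conversion (Colin’s version is ARM assembler; this just shows the logic):

#include <stdint.h>
#include <stddef.h>

/* Widen little-endian 16-bit PCM to 24-bit by adding a zero low byte.
   No sign handling is needed: the new byte sits *below* the old LSB,
   so negative samples come out right too – the padding is zero for
   both signs.  With the 16-bit data loaded at the back end of a
   buffer big enough for the 24-bit result (dst = buffer,
   src = buffer + nsamples, as Colin describes), this same forward
   loop also converts in place, since each write lands at or below
   the byte just read. */
static void pcm16_to_pcm24(const uint8_t *src, uint8_t *dst, size_t nsamples)
{
    size_t i;
    for (i = 0; i < nsamples; i++) {
        dst[3*i + 0] = 0;            /* padding: new least significant byte */
        dst[3*i + 1] = src[2*i + 0]; /* old low byte */
        dst[3*i + 2] = src[2*i + 1]; /* old high (sign) byte */
    }
}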
jim lesurf (2082) 1438 posts |
Yes. Almost all the high-spec DACs I have take it for granted that you’ll give them 24-bit samples. I get in a muddle over what this means, as I can never remember whether it means padding the least significant byte of negative values with ‘all ones’ or just with a zero byte like the positive! :-)

I’d echo much of what others have said wrt how to proceed. My feeling is that at present we should first establish a ‘stand-alone’ USB audio playout system with a reasonable API. This will allow others to try it, and we can then decide the best strategy for implementing an ‘integration’ with the rest of the ROSS. I suspect this will actually be best served by having two distinct APIs: one direct to USB, the other via an integrated API. There seem to me to be two advantages to this:

1) We can get a functional USB DAC system working sooner, so people can make use of it and become familiar with what it can provide – thus giving people practical reasons for wanting to focus on joining in developing an integration.

2) It allows the more serious audio nuts to have the most direct and simple ‘pass the parcel’ playout. For this, having added modules, volume controls, etc. is not ideal, as you have to spend some effort trying to stop them altering the output.

As I’ve said in other contexts, there is a basic conflict between what is required when people want to ‘mix’ and hear multiple sources out of one port, and having assured no-tampering maximum quality. Users need to be able to choose which they want – or even to have both, with USB direct for some purposes, while allowing another port to mix out others. Jim |
jim lesurf (2082) 1438 posts |
Guilty, M’lud. :-/ My “Wav_Cleaner” app makes this error. But in my case it was an oversight I didn’t notice, because the players I’ve used all play the results without a fuss. I’ll fix this, but people should note that various programs for working with audio files can add/include the ‘extended’ metadata/flags when you don’t expect it. I think sox may do this with 24-bit/96k files, so beware. Jim |
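A tolerant reader can cope with files damaged in the way described above. A sketch, using the constant names from Colin’s post; fmt_chunk_len is assumed to be the length field read from the "fmt " chunk header:

#include <stdint.h>

#define WAV_FORMAT_PCM        0x0001
#define WAV_FORMAT_EXTENSIBLE 0xFFFE

/* A "cleaned" file can carry the &FFFE (extensible) format tag but a
   16-byte PCM-style "fmt " chunk with the extension stripped.  Treat
   that combination as plain PCM instead of rejecting the file.  (A
   full parser should also check the SubFormat GUID when the extension
   block really is present.) */
static int is_effectively_pcm(uint16_t format_tag, uint32_t fmt_chunk_len)
{
    if (format_tag == WAV_FORMAT_PCM)
        return 1;
    if (format_tag == WAV_FORMAT_EXTENSIBLE && fmt_chunk_len == 16)
        return 1;   /* truncated extensible header: assume PCM */
    return 0;
}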
Colin (478) 2433 posts |
I had to resort to thinking about things in terms of ‘speaker’ and ‘microphone’ to preserve my sanity. Input and output are device-centric, and when you are looking for an output connection for the computer you are searching the input connections of the device. My inputs and outputs were going everywhere; ‘speakers’ and ‘microphones’ brought back calm. |
Colin (478) 2433 posts |
That way doesn’t futureproof programs. The old system will never be an integrated API. Programming USBAudio directly will mean your program will not work with non-USB devices, and if some future device bus comes along it won’t work with that either. An integrated API doesn’t mean you lose the ability to send data direct to a device, including motherboard devices. An audio API would only be an API to access the device directly; ‘system’ sound would make use of this API, not be part of it. It’s totally independent of the existing sound system and doesn’t even need to be distributed by ROOL – except that if they decided to use it, that would increase the chances of it being used. |
WPB (1391) 352 posts |
‘source’ and ‘sink’? Any good? I’ve only had a superficial read of this thread (don’t have an Iyonix – shame) but great work, guys! |
jim lesurf (2082) 1438 posts |
I appreciate that (I think!). However, my thought is that what I call the ‘direct’ approach is one which specific programs/drivers would be written or modified to offer. Once that is done, the need for an integrated approach seems to me to fall into the court of development for the SharedSound, SoundDMA and SoundControl modules. AIUI, at present all the ROSS audio ends up being expected to use SoundDMA – preferably via SharedSound. These are part of the common OS, so they would need to be developed on that basis in a way that ROOL are happy with. How that would work is, I guess, a matter for a wider range of people – mainly ones like Jeffrey Lee, etc. But they could then decide to modify, say, SharedSound to pick up the then-working ‘direct and specific USB’ calls, or arrange for others to be produced. I don’t see (as yet) that this would mean the ‘direct’ API would have to be removed. But I certainly agree that in the end, only having a direct and specific USB interface independent of the rest of the ROSS would be unsatisfactory. However, I think we need that working as a first step. Otherwise I fear we’ll fall into the “the best is the enemy of the good” trap, and people will produce ad hoc ways that simply cloud the later resolution – or worse, progress in allowing people to use USB DACs will stall while we wait.

Maybe I’m just over-anxious, as I’d been waving my hands about trying to raise attention for USB DACs for years before it was taken up. I don’t want that to stall now because people feel too many things need to be done first, as if only a ‘perfect’ solution will do!

As a slight aside, I guess this is why I also feel having the DeviceFS/TaskWindow interface is very useful. It may be an awkward, ‘fragile’ method on lower-powered machines. But the fact that it works on the Iyonix implies that it will be rather less fragile on newer, much faster hardware. So it is very useful as a simple way of doing demos, just playing some music, etc., before we have anything like !PlaySound or !DigitalCD being able to send their output to the USB DACs. It helps people to hear the point of the effort for themselves! Jim |