Proposed GraphicsV enhancements
Jeffrey Lee (213) 6048 posts
This page details what I currently have in mind for improving GraphicsV. As the page states, the main goals are to provide an API for exposing the features of the OMAP3 video controller, and to provide proper API-level support for multiple displays (including hotpluggable ones like USB DisplayLink devices). Feedback in this thread is welcome – but please don’t edit the wiki page itself, as there’s a good chance I won’t spot the edit when I write up the next version of the doc (which is likely to provide a full specification for the new APIs).
Steve Revill (20) 1361 posts
Hi Jeffrey. We’re not ignoring you right now, but we have a (non-RISC OS) project going on that is keeping us super busy for a few days, so we’ll have to get back to you in the near future about things like this.
Jeffrey Lee (213) 6048 posts
No problem – there’s plenty of stuff for me to do with the video driver before I’m even ready to start playing with the fancy stuff :)
Jeffrey Lee (213) 6048 posts
I’m slowly moving forward with the next version of the doc. I’ve worked out some of the details of how programs can claim ownership of overlays, and have come to the conclusion that a tag/value list/tree is pretty much the only way to represent the capabilities & configuration values of displays/overlays. But although the API for setting configuration values was easy to come by, I’m still struggling to work out a sensible API for querying what the current configuration is and what all the different options are. In a few days’ time I’ll hopefully have a better doc uploaded, with less waffle and more actual designs/specs. It’ll still be missing a few important pieces, but it should be a bit easier to read!
W P Blatchley (147) 247 posts
Looking forward to reading it!
Jeffrey Lee (213) 6048 posts
Version 2 of the doc is now up. As you can see, I decided not to go with the tag/value list/tree system and went with a table instead. The table approach should make it easier for programs and drivers to deal with the data, even if it introduces some memory wastage (e.g. even though they aren’t used, each overlay will still have a cell for storing timing parameters – there’s a rough sketch of the idea at the end of this post). I’m still not even sure if this approach is a good idea; trying to come up with an API that works for all display hardware, while exposing maximal functionality to programs, and preferably indicating why certain settings are invalid, isn’t very easy!

This new doc still isn’t perfect; I have a feeling that most/all of the stuff to do with passing head/overlay numbers in bits 16-23 of R4 can be removed, in preference to using the table-based interface instead. I suspect the most sensible thing to do is to retain the old GraphicsV reason codes for use by the system – i.e. if a driver receives a SetDAG or SetMode call then it knows that it should modify the head/overlay that’s being used by the system to display the desktop. Specifying the head/overlay number on a per-call basis is therefore not required (as long as one of the new reason codes is used to tell the driver which head/overlay the calls should map to). Also, I haven’t gone into much detail as to what the table actually looks like in memory!

Another big issue that needs resolving is memory management. I’ve only really just started to look into that now, but in order to sensibly support display rotation on OMAP3 it looks like drivers will need to be able to exert much more control over memory allocation/usage than they do at the moment. Luckily, similar memory management modifications have been made to Select, so we should hopefully be able to base our API modifications around theirs.

If anyone has any ideas on how to make the API or the doc better, please share them! Since the doc does propose some big changes, I’ll probably hold off on any further rewrites until people (e.g. ROOL) have had a chance to read and comment on it. And as always, there are still 101 other things for me to work on apart from this ;)
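The rough sketch mentioned above: one fixed-layout row per head/overlay. Names and fields here are purely made up for illustration, not what the doc specifies – but it shows the wastage point:

typedef struct {
    unsigned int flags;           /* e.g. bit 0 = row describes an overlay */
    unsigned int pixel_formats;   /* bitmask of supported pixel formats */
    unsigned int max_width;
    unsigned int max_height;
    unsigned int timings[10];     /* sync/porch values etc. – only meaningful
                                     for heads, but still allocated (and
                                     wasted) in every overlay’s row */
} DisplayTableRow;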
Chris (121) 472 posts
Just for info: there’s an article up on TIB (http://www.iconbar.com/articles/Easier_video_playback_on_RISC_OS/index1255.html) with some info on the ARM port of the Ogg Theora libraries. May be of interest to anyone working on an API for the new OMAP facilities.
Dave Higton (281) 668 posts
I read most of Jeffrey’s “Proposed GraphicsV enhancements” page last night. I don’t know if I missed anything, but I looked for a method of describing spatial relationships between linked displays and couldn’t see one. A human being has to be the original source of the information (e.g. display 1 is to the left of display 0), but subsequently the OS and applications should be able to look it up and use it.
Jeffrey Lee (213) 6048 posts
Good point. Adding spatial location info is certainly something worth considering, but it’s not something I’m particularly eager to add to this version. If you can spec out a good way for it to work then I can probably add the relevant support when I write the rest of the code, but I don’t think I’ll be losing any sleep over the design myself ;)
Dave Higton (281) 668 posts
How about this: each display has four numbers associated with it, being the numbers of the displays to the left of it, to the right of it, above it, and below it, or -1 if there is no such linked display. There’s an implicit assumption that linked displays have the same resolution. If they don’t, it might be excessively difficult to link them anyway.
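A minimal sketch of that in C (assuming displays are numbered with small integers; the names are illustrative only):

#define NO_LINK (-1)

typedef struct {
    int left;    /* number of the display to the left, or NO_LINK */
    int right;   /* number of the display to the right, or NO_LINK */
    int above;   /* number of the display above, or NO_LINK */
    int below;   /* number of the display below, or NO_LINK */
} DisplayLinks;

/* “Display 1 is to the left of display 0” would then be recorded as:
   links[0].left = 1; links[1].right = 0; */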
Jeffrey Lee (213) 6048 posts
Which is one of the reasons I’m not keen on coming up with the API myself ;) I haven’t played around with multi-monitor setups in Windows much, but I do know that it supports displays of differing resolutions, and allows you to specify the alignment of the displays. The displays span to each other’s edges, but I’m not sure how/when the layout changes if one of the displays changes resolution, rotates, disconnects, etc.
Dave Higton (281) 668 posts
I chose the words “excessively difficult” to imply that it was too difficult and therefore you wouldn’t attempt it. At least, not at this revision. It’s OK if the adjacent edges of adjacent linked displays have the same length, regardless of the other dimension. The test has to be performed when the operator tries to tell the system about adjacencies. If they don’t fit, then the linkage information won’t be accepted.
Jeffrey Lee (213) 6048 posts
I’m hoping to start implementing all this stuff sometime next year. However, it’s a big bunch of changes, so it’ll need splitting up into smaller chunks. My current thoughts are to implement things in the following order:
The first two items on that list are relatively simple and don’t need much more (if any) design work. The overlay support item is still very much up in the air, however. I might try coming up with two levels of API – a simpler, more lightweight one (e.g. “give me a YUV overlay with these dimensions suitable for playing video in the desktop”) and then, later on, the more complex one. Of course we may find that the simple API is all that we really need to implement, and that no application developers are in need of anything more complex – in which case we can ignore the complex one entirely. The new pixel formats item already has a good design, but could probably do with revisiting to make sure it covers the capabilities of the Pi & OMAP4.

Other relevant discussions for me to bear in mind:
Jeffrey Lee (213) 6048 posts
Another thing for me to remember to sort out: Mode 7 support. Currently there are a couple of build-time options controlling whether it uses a high-res (640×500) or low-res (320×250) mode, and whether it uses 16 or 256 colours. This isn’t great when it comes to supporting arbitrary graphics drivers at runtime, so ideally we’d want to get rid of those options and make the kernel adapt to whatever the current graphics driver & MDF support.
Jeffrey Lee (213) 6048 posts
Long and rambling post alert!

Status update

I’ve finally started work on this(!) No code to check in yet (CVS is still frozen for the pending 5.20 release), and no document updates yet (the API is now evolving as I implement it, rather than me trying to design it all ahead of time), but my work so far consists of:
Overlay thoughts

For the overlay system, I’m thinking of completely getting rid of the “big ugly table describing possible configurations” approach and instead going with a “resource”-based approach. I.e. rather than the driver describing all the overlays to the client and allowing the client to pick whichever overlay it wants, the client has to ask the driver for an overlay with certain attributes, and it’s down to the driver to decide which overlay best fits the bill (if any) – there’s a rough sketch of what I mean at the end of this post. Once a client has requested an overlay, many of the attributes (pixel format, framebuffer width/height, transparency mode, rotation) will be fixed; if the client wishes to change them it has to release the overlay and request a new one. Only simple things like on-screen position & scale factors can be changed dynamically. This approach is perfect for the Pi, where we use a similar API to request overlay resources from the GPU.

I’m also likely to make it permissible for the driver to release any/all overlays if the desktop mode changes, or at any other time the driver sees fit. This will allow the system to more easily cope with platforms with complex restrictions (e.g. OMAP3), even if it inconveniences clients somewhat by forcing them to (e.g.) fall back to software YUV decoding if a desktop mode change means that they’re no longer able to create a YUV overlay with the parameters they want.

What’s next

Although I did start off by adding support for multiple drivers, I’m not doing things in quite the same order as the list I made a couple of posts ago. Specifically, the multiple driver support is still only half implemented; some aspects of it (like a service call on driver registration/deregistration, and a driver enumeration API) are still missing. I’m also missing two crucial calls that the OS will make when it starts & stops using a driver with the VDU driver – at the moment, when the OS stops using a driver the display is left on, when really it should turn off (assuming there aren’t any other overlays left enabled). I’m also not quite ready to start on the head & overlay support. So I’m going to skip ahead a bit and start work on adding support for some new pixel formats, primarily the 16bpp 565 format. Compared to some of the others this should be pretty easy, and will help both our Select compatibility and our Pi & OMAP3 compatibility. It should also allow me to produce a good list of places which will need updating when adding support for the other formats.

Archaeology

Today, however, I’ve mostly been trying to track down information on some of the more obscure features of the OS video drivers:

VIDC20 gamma table format

For 16bpp modes, the gamma table supplied to GraphicsV drivers is in a VIDC20-specific format which needs demangling for use on most other hardware. I was hoping I could change things so that the table is supplied to the driver in a sensible format, but apart from the default gamma table, it looks like it’s not the kernel’s fault – PaletteV accepts the mangled form directly, so any changes would either have to involve changing the specification of PaletteV (breaking any apps that use 16bpp gamma correction) or adding some demangling code to the kernel. However, during my searches I did find this newsgroup posting which describes the gamma table format, which will save me a bit of hassle when it comes to implementing the 565 support (although I suspect it should state that blue comes from bits 8-15 of the pixel, not bits 0-7.
And I’d expect the top 4 bits to be used for the “ext” supremacy/transfer palette lookup, judging by the way the kernel builds the default gamma table.)

Interlace hell

At the GraphicsV level, there are three ways of controlling interlace – the interlace bit in the sync/polarity flags, GraphicsV 3, and a control list item – and they aren’t handled consistently by all drivers:
VIDC20Driver, which is based around the HALified VIDC20 driver that was originally in the kernel, uses GraphicsV 3 and the sync/pol flag to decide whether interlacing is used. Since this is effectively a reference driver for how video drivers should act, I’m recommending deprecating the control list item, especially as it’s something STB-specific which was never fully integrated into the kernel (support for interlace via two separate framestores). The only problem with that approach is that the NVidia module ignores the sync/pol flag and instead uses the control list item (+ GraphicsV 3). But that should be easy enough to fix.

It’s also worth pointing out that ROL went down the control list item route for specifying interlace. It also looks like they’ve added a control list item for the vertical offset, unlike Castle, who went with the approach of directly tweaking the vertical timing values. Unfortunately the ROOL & ROL allocations clash, so going with the alternative approach of dropping the sync/pol flags in favour of the control list item isn’t going to aid interoperability unless we also change our allocation numbers to match (which may not be a big problem, since both of the clashing entries are STB-specific for us).

“Supports interlace with progressive framestore” flag

This flag is also related to the STB Interlace module. If set, it means that everything’s fine and no special hacks are needed. If not set, it means the module needs to do its thing whenever an interlaced mode is used. Only problem is, VIDC20Video and NVidia don’t set the flag! This is presumably an oversight, and a good example of why new flags should be added in a backwards-compatible manner (the flag should really be “interlace requires two framestores” or similar).

“Uses separate framestore” flag

You’d expect this flag to indicate that GraphicsV 9 is implemented, especially as the flag was added to the GraphicsV docs at the same time as support for GraphicsV 9 was added to the kernel. But the kernel doesn’t seem to make use of the flag, and the NVidia driver (which was likely being worked on at the same time, considering the kernel changes were attributed to the Tungsten project) doesn’t set the flag. Perhaps another oversight? Or the flag was obsoleted before it even found its way into CVS? It’s a bit of a funny one, since the NVidia sources even make reference to the flag (although we don’t have the full source history visible, so it’s hard to tell when that reference was added). The “interlace with progressive framestore” flag is there too!
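Going back to the overlay thoughts above, here’s the sketch promised there – a purely illustrative shape for the request-based API (none of these names or parameters are final, and the real thing would presumably be GraphicsV reason codes rather than C functions):

typedef struct {
    unsigned int format;          /* requested pixel format, e.g. YUV 4:2:0 */
    unsigned int width, height;   /* framebuffer dimensions */
} OverlayRequest;

/* Ask the driver for whichever overlay best fits the request;
   returns a handle, or -1 if the request can’t be satisfied. */
int Overlay_Request(const OverlayRequest *req);

/* Simple attributes (on-screen position, scaling) can be changed
   while the overlay is held... */
void Overlay_SetPosition(int handle, int x, int y, int scr_w, int scr_h);

/* ...but fixed attributes (pixel format, framebuffer size, rotation)
   can’t – the client releases the overlay and requests a new one. */
void Overlay_Release(int handle);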
nemo (145) 2528 posts
“For 16bpp modes, the gamma table supplied to GraphicsV drivers is in a VIDC20-specific format which needs demangling for use on most other hardware.”

Yes, and no.

“PaletteV accepts the mangled form directly”

No it doesn’t. Not the gamma tables. You’re thinking of the palette, which is a separate concept at the PaletteV interface level. The 8bit gamma tables provided to PaletteV,9 (and the 16bit ones to PaletteV,256) are combined with the palette (PaletteV,2/8) before being used by the hardware. Ideally GraphicsV would preserve this separation, as modern hardware has 12bit or 16bit LUTs which produce much better calibration results.

I should also point out that in the presence of the Monitor module, any tables passed to PaletteV,9/256 are multiplied by the configured calibration curves. This allows programs that naughtily use PaletteV to fade the screen to do so without destroying the user’s calibration. Magic words in R3 in the call to PaletteV,9 control this.

Edit: Actually PaletteV,256 uses 1.16 fixed-point values.
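To illustrate that calibration step (reading “multiplied by” loosely as table composition – the Monitor module’s actual arithmetic may well differ, and this ignores the 1.16 fixed-point case):

/* Sketch: an application’s “effect” gamma table is passed through the
   user’s configured calibration curve before reaching the hardware. */
void compose_gamma(const unsigned char effect[256],
                   const unsigned char calibration[256],
                   unsigned char out[256])
{
    for (int i = 0; i < 256; i++)
        out[i] = calibration[effect[i]];
}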
Jeffrey Lee (213) 6048 posts
“For 16bpp modes, the gamma table supplied to GraphicsV drivers is in a VIDC20-specific format which needs demangling for use on most other hardware.”

Which bit is “no”? :)

“PaletteV accepts the mangled form directly”

Yes, you’re right. I’d missed the code that was combining the VIDC-format palette with the ordinary gamma tables. Thanks!

Last night I also realised there’s another reason for changing away from VIDC’s gamma format for 16bpp modes – if you were to demangle a VIDC 5:5:5 gamma table you’d only end up with 32 entries, not 256. This is fine for VIDC, where the gamma is applied directly to the 555 desktop overlay, but not ideal for systems like OMAP where the gamma is applied to the output of the overlay mixer – i.e. a 24bit 8:8:8 colour, irrespective of what pixel format the desktop is using. So apart from changing the code to not pass VIDC-format gamma tables to GraphicsV, we also need to teach the OS whether the gamma tables are being applied to just the desktop or to the output of the overlay mixer.
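For illustration, one way the demangled 32-entry table could be stretched to the 256 entries an 8:8:8 mixer-output LUT needs – simple linear interpolation, purely a sketch rather than a proposal:

/* Expand a 32-entry (5-bit) gamma table to 256 entries by linearly
   interpolating between adjacent 5-bit points. */
void expand_gamma_5to8(const unsigned char in[32], unsigned char out[256])
{
    for (int i = 0; i < 256; i++) {
        int idx  = i >> 3;                            /* source entry     */
        int frac = i & 7;                             /* position between */
        int next = (idx < 31) ? in[idx + 1] : in[31];
        out[i] = (unsigned char)((in[idx] * (8 - frac) + next * frac) / 8);
    }
}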
nemo (145) 2528 posts
At the risk of reiterating… VIDC doesn’t have “gamma tables”, it only has a hardware palette. That’s an implementation detail; the PaletteV interface has the logical palette and gamma tables as separate concepts, and that’s necessary. The Monitor module implements an additional class of table by separating the gamma tables into calibration and “effect” – so the “effect” gamma is modified by the calibration before being passed via PaletteV to the hardware. On the RiscPC that was via PaletteV,9; on the Nucleus (with the Imago motherboard) it was via PaletteV,256. GraphicsV ought not to have inherited the VIDC implementation detail of combining the gamma table with the palette. The gamma table needs to be kept separate so that it can be put directly into the gamma LUTs of hardware that has them. Note that there is also potentially a gamma curve for the alpha channel.

Ideally one would want to control the gamma of every overlay separately – it is conceivable that the desktop is intended to be at 1.0 while an overlaid video would be at 2.2. The Monitor module also implements the only known gamma documentation call in RISC OS – so that applications can sense what gamma the display is calibrated to and act accordingly. As it happens, the only applications that would need to care are likely to mandate a linear calibration (that is, gamma 1.0, not linear tables) for code efficiency’s sake. Doing gamma-aware graphical processing is complicated and expensive, and nothing in RISC OS gets it right. In particular, things that get the output wrong unless the display is calibrated to 1.0 gamma include FontManager, ColourTrans, ChangeFSI, PhotoDesk… in other words, everything. I don’t expect that to be changed out of the box, but it would not be good to break the one mechanism available for fixing it.
Rick Murray (539) 13805 posts
Well, wouldn’t ColourTrans be the logical first place to start to repair this discrepancy? One could argue that CFSI/PhotoDesk either use ColourTrans (which is broken) or implemented their own because of ColourTrans’ inabilities and… oops… got that wrong too. Or, I guess, lazy coding, the more viable excuse – where a xoxoxo pattern is “acceptable” for 50% grey, even though reality isn’t (ever) that tidy.
nemo (145) 2528 posts
NO! That is to say, it must either be fixed (in software) everywhere (so the display can stay at 2.2-ish but everything gets pixel coverage/alpha blending/palette matching correct) or fixed nowhere (so the display must be calibrated to 1.0 to fix everything in one go). Having some applications (or even some code paths) gamma-aware while others are not would be a nightmare.

No. Most stuff gets it wrong because the authors had no idea what they were doing (in this area). PhotoDesk and Vantage specify a 1.0 gamma because their authors did know what they were doing, but didn’t want the overhead of gamma correction added to every pixel manipulation – it’s expensive. There are corner cases – Browse’s PNG handling took notice of the PNG’s gAMA chunk, but assumes that RISC OS must be 2.2 gamma and adjusts accordingly. Unsurprising – in the absence of the Monitor module there’s no way to know what gamma the machine is calibrated to. This problem, extended to all platforms, is discussed at length here. Vantage did use the gamma value supplied by the Monitor module when writing PNGs, so they would appear correct on other systems (but incorrect in Browse on the same system!). Hi ho.

Well, the gamma problem (that is to say, the mistake made by ChangeFSI, FontManager, ColourTrans etc.) is the incorrect assumption that 50% grey == 128. In other words, that 50% luminance is achieved with a 50% colour value. That just ain’t true, unless you have a 1.0 gamma.
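To put a number on it (standard gamma arithmetic, nothing RISC OS-specific):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* With a display gamma of 2.2, the pixel value that produces 50%
       luminance is 255 * 0.5^(1/2.2) – about 186, nowhere near 128. */
    double gamma = 2.2;
    printf("50%% luminance needs value %.0f\n",
           255.0 * pow(0.5, 1.0 / gamma));
    return 0;
}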
Rick Murray (539) 13805 posts
Okay, now in reality, would ColourTrans be the place to start? We aren’t going to automagically have everything fixed at the same time; and it seems to me that if the OS itself can’t get it right…

…which might have been a valid point on <100MHz systems with a snail-speed bus. It’s been a long time since then, plus packages on other systems managed on older hardware – not because the x86 is magical and can afford this sort of thing without computational penalty, but rather because the holy grail is to have scanner, screen, and printer all produce the same result. Anything less is “domestic” (where we put up with crap like the sRGB profile for printers because the manufacturers can’t be bothered); and if the setup can’t even manage a picture that looks the same on another, different, machine, it’s just a toy.

All of my blog image work is done on the PC. Not because of ease of connectivity (photos from phone via bluetooth, etc), nor because of the ease of use of the graphics package. Sure, those are huge plus points. However the primary reason is that there was a notable difference between pictures handled under RISC OS when viewed on a PC. I don’t recall if they were too light and washed out, or too dark like any number of television programmes that want to hide Toronto standing in for <everywhere>. I think they may have been too dark. One day many eons ago I went through and fixed most of them.

Mathematically possible, but physically?
Jeffrey Lee (213) 6048 posts
After weighing up a few different options, I’ve updated the extended framebuffer format specification with my thoughts on how type 3 VIDC lists (and therefore the HAL/GraphicsV mode vet/set functions) should be extended to allow the new pixel formats to be specified. If you don’t like it, speak now or forever hold your peace!
Sprow (202) 1153 posts
I’m not sure the “top bit set if control list item is new” idea is wise. The NColours aren’t optional from a vetting point of view (eg. asking for packed 24bpp on a controller that doesn’t support that should be declined). If the main reason is to beef up control list parsing then that’s manageable, since all current GraphicsV drivers are held here anyway, and even an added top-bit-set item would be ignored by a new driver softloaded onto an old OS. My instinct would be to have a 32-bit word where 1s represent the items the driver understood (since there are currently only 14 control list items, expanding to 16 maybe?).
Colin Ferris (399) 1809 posts
Would any of these changes help in getting the Viewfinder/Vpod working with RO5? Has anyone tried getting the two podules above to work with RO5/RPC?
Jeffrey Lee (213) 6048 posts
Looks like you’ve got the wrong end of the stick; it’s not “top bit set if control list item is new”, it’s “top bit set if control list item can safely be ignored by the driver”. So in an ideal world, something like the DPMS control list item would have had the top bit set, since the driver doesn’t need to pay attention to that item (every call to SetBlank specifies the DPMS state). But if we added something new, e.g. to control display rotation, that would have the top bit clear, so that drivers which don’t recognise it can complain and refuse to accept the mode.
“My instinct would be to have a 32-bit word where 1s represent the items the driver understood”

That could be one approach, yes. Certainly one of the problems with my idea is that it’s done purely from a driver’s perspective; what if some other part of the system needs to parse the control list and throw an error when unknown items are encountered? If it encounters a new item, how would it know whether it’s important or not? So making the driver explicitly list which items it understood would be a good idea.
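For illustration, a driver-side vet under the “top bit = safe to ignore” scheme might look something like this (the item numbers are made up for the example, not real allocations):

#include <stdbool.h>
#include <stdint.h>

#define CTRL_IGNORABLE  (1u << 31)  /* proposed “can safely be ignored” bit */
#define CTRL_TERMINATOR 0           /* end-of-list item */

/* Hypothetical set of items this particular driver understands */
static bool driver_understands(uint32_t item)
{
    switch (item & ~CTRL_IGNORABLE) {
    case 1:  /* e.g. interlace */
    case 2:  /* e.g. DPMS state */
        return true;
    default:
        return false;
    }
}

/* Control list: (item, value) pairs, terminated by item 0 */
bool vet_control_list(const uint32_t *list)
{
    for (; list[0] != CTRL_TERMINATOR; list += 2) {
        if (driver_understands(list[0]))
            continue;
        if (list[0] & CTRL_IGNORABLE)
            continue;   /* unknown but harmless: skip it */
        return false;   /* unknown and mandatory: refuse the mode */
    }
    return true;
}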
These changes would be a step in the right direction towards getting the cards working, but there would still be a fair amount of work required to modify or rewrite the Viewfinder/Vpod drivers to use GraphicsV instead of their existing interfaces (Vpod would be using ROL’s VideoV; Viewfinder I guess would be hooking into the OS directly?)