EDID discussion
|
The original intent of this post was to report progress on EDID. However, the discussion has ‘evolved’ into one about multiple displays. In many ways this is useful. I don’t consider it part of the EDID bounty or work – just ensuring that there is as little reworking as possible and sparking debate. Jeffrey has clearly thought hard about the low-level elements, so it is probably time to think about the higher-level stuff now. Monitors using EDID can be uniquely identified by model name and serial number. Those with MDFs will be allocated anyway :-). |
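For anyone following along, the identification data lives at fixed positions in the base EDID block: the manufacturer ID is three packed 5-bit letters in bytes 8–9, and the product code and serial number follow in bytes 10–15. A minimal decoder might look like this (the function names are mine, not from any RISC OS source):

```c
#include <stdint.h>

/* Decode the 3-letter PNP manufacturer ID packed into EDID bytes 8-9
 * (big-endian 16-bit value, three 5-bit fields, 'A' == 1). */
void edid_manufacturer(const uint8_t edid[128], char out[4])
{
    uint16_t v = (uint16_t)((edid[8] << 8) | edid[9]);
    out[0] = (char)('A' - 1 + ((v >> 10) & 0x1F));
    out[1] = (char)('A' - 1 + ((v >> 5) & 0x1F));
    out[2] = (char)('A' - 1 + (v & 0x1F));
    out[3] = '\0';
}

/* Product code (bytes 10-11) and serial number (bytes 12-15) are
 * stored little-endian. */
uint16_t edid_product(const uint8_t edid[128])
{
    return (uint16_t)(edid[10] | (edid[11] << 8));
}

uint32_t edid_serial(const uint8_t edid[128])
{
    return (uint32_t)edid[12] | ((uint32_t)edid[13] << 8) |
           ((uint32_t)edid[14] << 16) | ((uint32_t)edid[15] << 24);
}
```

Manufacturer + product + serial together is about as unique an identifier as EDID offers, which is what makes per-display configuration feasible.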
|
Do it right first time is the motto. I can see that drawing the line could be difficult.
For many installs the latter may be critical, as people frequently buy two of the same. |
|
Get that sussed and you’ll be one up on Windows’ eccentric handling of USB ports. The other day, when writing some music, I plugged my keyboard into the lower right port (as the TV capture card was in the upper right one). Cue the whole “New hardware detected” business followed by installing the drivers (does this install a new set of drivers, or does it install the drivers on top of the existing ones and set up new registry entries?). Something that should be simple took depressingly long. |
|
First time round it pulled the drivers from the install set; second time round it does something similar but only ends up modifying the registry to match the new location. Move the devices around and the registry has elements in common with Gravelly Hill Interchange (M6, you know the nickname). Anyway, do we have a problem with mouse, keyboard and boot drive all on USB and then swapped around at a later point? |
|
I think we should probably help William out here and split the “Multi-head support” conversation into another thread (maybe in the Wish List forum) if it’s not there already and allow this one to return to discussion of the basic EDID support for the bounty that’s in progress. I admit I’ve been as bad as anyone for dragging this a little off-topic but we’re starting to move a little far away from EDID now… |
|
That would be good :-D. Should I summarise the EDID thread then:

DONE (ie. code with code@riscosopen.org)
- ‘Established modes’ are in – this covers modes from 640×480 up to 1280×1024 at ‘industry standard’ frame rates.

TO DO
- Other mode data is not converted into usable modes yet, but Steve R has pointed me in the direction of some code which will allow me to extract the principles (and improve my support for established modes).

TO CONSIDER
The bounty listed four objectives. The above goes a long way towards the first; the other three need a degree of evaluation having made progress so far.
- Mode evaluation / restriction – the first consideration here is: ‘is this actually desirable?’ Criticism has been levelled at Select’s EDID implementation for artificially restricting modes based upon reported bandwidth. We could implement a GraphicsV call to supply a supported mode list, and the ScreenModes extension could then vet modes against that list. However, this would make it difficult for overclockers or others to operate outside the limits imposed by the OS, while not offering one may let less knowledgeable users ask for modes outside the boundaries of the device. I’m inclined to think it’s a good idea, but it needs concurrent work on the graphics driver front to supply the data for ScreenModes to vet against. I suspect we need a table of XRes, YRes, Framerate. A bitfield (à la established timings, or indeed the ‘mode number’ on the Pi) would work, but would need maintenance as extra modes were added – and would also be harder to extend without changing ScreenModes. If many people are ‘meh’ or anti then I’m happy to take this off the plan, as I think the vetting bit may cause as well as cure problems.
- Hot plugging – is a bit of a nightmare on current setups as stuff may go bang. Current support will allow this to be done with a * command issued whenever we want the monitor’s settings re-read. A service call would be more appropriate (‘a service display changed’). This starts to head into multi-monitor territory though (not least because I suspect the only devices it’s ‘safe’ to hot plug on are the same ones we’re aiming for multi-head support on), and given that encapsulating the * command into a service call (‘if service call received, claim it and call *ReadEDID’s code’) is going to be three or four lines extra in ScreenModes, I shan’t lose sleep over that and will consider hot plugging ‘done’ pending API changes. I suspect for safety we should only be issuing that on hardware that supports EDID and is safe to hot plug.
- ‘Display manager changes’: not AFAICS needed for EDID, as ScreenModes does all we need for DisplayManager to catch up with, although extensive changes may be needed for multiple display support.
- Reporting a safe mode for the display: this is actually quite important. I can build some EDID translation into ScrnSetup to identify the preferred mode, but that would be very superficial. Maybe instead we could get ScreenModes to report a ‘preferred’ mode for other modules / tasks to read (SWI ScreenModes_GetPreferredMode perhaps); *WimpMode would then need to default to that mode (perhaps *WimpMode with no parameters should call ScreenModes_GetPreferredMode, use that to display the monitor’s preferred mode, and fall back to 640×480@60Hz if that fails; that way MDFs can specify a preferred mode too).

NEEDED
- Pi EDID support. |
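The table-of-(XRes, YRes, Framerate) idea could be as simple as the sketch below – the struct layout and function name are purely illustrative, not a proposed GraphicsV API:

```c
#include <stddef.h>

/* Hypothetical shape for a driver-supplied mode list: ScreenModes
 * would vet a requested mode by looking it up, rather than decoding
 * a fixed bitfield. */
typedef struct {
    int xres, yres, framerate;
} supported_mode;

int mode_in_list(const supported_mode *list, size_t n,
                 int x, int y, int hz)
{
    for (size_t i = 0; i < n; i++)
        if (list[i].xres == x && list[i].yres == y &&
            list[i].framerate == hz)
            return 1;
    return 0;
}
```

A driver could extend such a table freely without ScreenModes needing to know about new modes in advance, which is the maintenance advantage over a bitfield.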
|
In terms of hot plugging – do we need much more than this? If a graphics driver supports hot plugging, then the intent would be that the code within *ReadEDID needs to be called. That will read the data, update the mode lists, and then change mode if required. The priority there will be: keep current mode (so if you pulled a display’s video cable out and plug it back in you’d get the same display); then the preferred mode for the display; then a ‘safe’ mode. That sounds fine to me.
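That priority order is easy to pin down in code. The following is only a sketch of the policy described above, with mode_ok() standing in for whatever vetting ScreenModes actually performs:

```c
/* Pick the mode to use after a hot-plug event, in priority order:
 * keep the current mode if the display still accepts it, else the
 * display's preferred mode, else a safe fallback (640x480@60). */
typedef struct { int x, y, hz; } mode;

typedef int (*mode_ok_fn)(const mode *m);   /* nonzero = acceptable */

mode pick_mode_after_hotplug(const mode *current, const mode *preferred,
                             mode_ok_fn mode_ok)
{
    static const mode safe = { 640, 480, 60 };
    if (current && mode_ok(current))
        return *current;     /* unplug/replug keeps the same display */
    if (preferred && mode_ok(preferred))
        return *preferred;
    return safe;
}
```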
Using multiple instantiation is a convenient way of allowing modes to be tested without messing with the loaded MDF. But I can’t see it working very cleanly if we used instancing to support multiple monitors. Instead I think we’ll have to change it so that one instance of ScreenModes manages the mode lists for all displays. ScrnSetup can still continue to create a new instance to try out mode changes, but for the majority of the time there’ll only be one instance of the module active. There’ll also need to be some changes made to the service calls which ScrnModes responds to, but all of this is a bit beyond the scope of your EDID changes. My only request is that you try and avoid littering the code with hundreds of global/static variables, so that I can easily update it to store everything in an array of structs, one struct per display.
That’s probably just a speculative item rather than a definite item. The only mandatory thing I can think of relating to the display manager is making sure that it updates itself correctly whenever the mode list changes (the display manager builds its own cached list of modes). There’s already a service call (Service_ModeFileChanged) that ScrnModes issues when a new MDF is loaded – so all you need to do is make sure the service call is also issued when generating a mode list from EDID. You’ll also need to make sure current_monitor->name is set correctly so that the display manager shows the right name in the title bar. The only other changes I can think of might be to add a new menu option or two to allow control over EDID use. At the moment you can load an MDF just by dragging it to the display manager’s iconbar icon, so it might also make sense to add the ability to easily switch from using an MDF back to using EDID (or to force EDID to be refreshed if someone’s hotplugging displays and they don’t trust the display detection to work correctly).
The ‘auto’ option in ScrnSetup seems to just provide a method to disable the use of an MDF. It sets the current monitor type to the configured monitor type (as stored in CMOS). What it does not do is enable the Archimedes-era monitor type auto detection feature, unless your configured monitor type is already set to ‘auto’. Since the current incarnation of the ‘Auto’ option is a bit pointless, I don’t have any objections to removing it, or repurposing it to act as a way of enabling the use of EDID.
If EDID is enabled, I think it should really be read during ROM initialisation, not when the monitor configuration file is read during PreDesk. That way if you’re starting a machine with a broken boot sequence or with a dodgy monitor (i.e. one which doesn’t like the builtin mode timings for mode 28) you should be able to get a usable display. I’m not entirely sure of the best way of handling this. If you look at the list of monitor types then you’ll see that “use an MDF” is a monitor type. So we could perhaps add “use EDID” as another monitor type. Then on startup ScrnModes can check if that monitor type is selected in CMOS and read the EDID as appropriate (note that monitor type 7 should currently never appear in CMOS – ScrnModes switches to that monitor type automatically when LoadModeFile is called). Or we could use the existing Archimedes-era ‘auto’ monitor type (stored in CMOS as type 15), as that setting is basically nonfunctional on RISC OS 3.5+ OS/hardware anyway. Or we could start using monitor type 7 in CMOS, with ScrnModes always using EDID unless an MDF gets loaded later on (a bit risky if we aren’t certain of how well our EDID implementation works). It might also be worth storing the ‘use the device’s preferred resolution’ setting in CMOS as well (perhaps tied to the configured mode CMOS byte? The OS still allows you to set ‘auto’ as the configured mode, so we could use that to indicate that the mode should be selected from the EDID). But I don’t have any objection if this setting is only stored in the configure file.
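To make the options concrete, here is one (entirely speculative) shape for the startup decision if the old ‘auto’ type were repurposed. The type number follows the post, but treating type 15 as “use EDID” is just one of the options floated, not agreed behaviour:

```c
typedef enum { SRC_BUILTIN, SRC_EDID } mode_source;

/* Hypothetical: type 15 is the old Archimedes 'auto' monitor type,
 * nonfunctional on RISC OS 3.5+, reused here to mean "try EDID".
 * Falls back to the built-in monitor types if EDID can't be read,
 * so a dodgy monitor still gets a usable display at ROM init. */
mode_source startup_mode_source(int cmos_monitor_type, int edid_readable)
{
    if (cmos_monitor_type == 15 && edid_readable)
        return SRC_EDID;
    return SRC_BUILTIN;
}
```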
Yeah, there’s a bit of work needed here. Initially, just make sure that any modes you generate are accepted by the driver by generating a VIDC list and calling GraphicsV 7. Later on we’ll probably need to extend GraphicsV so that drivers can (a) indicate why a mode has been rejected and/or (b) tell ScrnModes exactly what their restrictions are. I.e. min/max pixel rate, maximum memory bandwidth, min/max width/height, width multiple, min/max porch/sync timings, etc. The more information ScrnModes has as to why a mode will or won’t work, the better it can tweak it in order to find a set of parameters which both the monitor and the GPU are happy with. Unfortunately there are sometimes quite complex relationships between mode timing values – take a look at the check_mode_timings function in the MakeModes sources for all the restrictions that apply to VIDC20. So my question for you would be: what would make things easiest for you? Options (a), (b), or both? (b) has the problem that it can’t really be used to describe any complex relationships (we’ll end up extending the list of constraints that ScrnModes understands for each new video driver we add), but by knowing most of the constraints itself it should allow ScrnModes to skip trying to generate large numbers of modes it knows will fail. (a) on the other hand could be used to indicate where complex constraints are, even if it can’t always convey how to solve the constraint. E.g. it could return a list of all the problems encountered – “pixel rate too high”, “complex constraint between horizontal sync width and horizontal front porch”, “unsupported colour depth”, etc. Another useful bit of feedback could be the actual pixel rate which will be used if the mode is selected – since the hardware might not be able to generate the exact rate which was requested. Useful when dealing with modes which are close to the limits of the monitor. 
Note that attempting to generate a list of all modes which the hardware supports is futile – for most machines the list would be too long to fit in memory.
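Option (b) might look something like the sketch below – the fields are lifted from the examples in the post, and the reason bitmask shows how (a)-style feedback could ride alongside it. None of this is a defined GraphicsV structure:

```c
/* Hypothetical driver-published constraint block that ScreenModes
 * could check before generating full timings, plus option (a)-style
 * rejection reasons. Complex inter-timing relationships (as on
 * VIDC20) would still need the driver's own vet call. */
typedef struct {
    unsigned min_pixel_khz, max_pixel_khz;
    unsigned min_width, max_width, width_multiple;
    unsigned min_height, max_height;
} gv_constraints;

enum {
    REJ_PIXEL_RATE = 1 << 0,
    REJ_WIDTH      = 1 << 1,
    REJ_HEIGHT     = 1 << 2,
};

/* Returns 0 if acceptable, else a bitmask of rejection reasons. */
unsigned check_against_constraints(const gv_constraints *c,
                                   unsigned w, unsigned h,
                                   unsigned pixel_khz)
{
    unsigned rej = 0;
    if (pixel_khz < c->min_pixel_khz || pixel_khz > c->max_pixel_khz)
        rej |= REJ_PIXEL_RATE;
    if (w < c->min_width || w > c->max_width || (w % c->width_multiple))
        rej |= REJ_WIDTH;
    if (h < c->min_height || h > c->max_height)
        rej |= REJ_HEIGHT;
    return rej;
}
```

Knowing *which* bit was set is what lets the mode generator decide whether tweaking is worth attempting at all.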
I think it’s always going to be a scaled version of the primary output, but I haven’t fully investigated to see if there’s another way of doing things.
Unfortunately the detection signal doesn’t appear to be routed to the OMAP, so we’re still stuck with probing the IIC bus to work out if something’s there or not. But the good news is that the PandaBoard and PandaBoard ES support proper hot-plug detection on both the HDMI and DVI connectors (nor are there any obvious warnings in the manual about hotplugging being ill-advised).
“I don’t know enough about the hardware side of things to know whether it’s possible – now that there is a purpose to doing so – to make RISC OS on the Pi support EDID.”
Implementing EDID reading on the Pi should be fairly straightforward; I’ll have a go at it sometime soon. Recently I also found some sample code for switching the GPU mode (including switching to/from TV-out, I think), but I’m not sure how much work it will be to integrate that with RISC OS (lots of source code for me to dig through to find the actual core bit of code – and I think the firmware will only operate in terms of the HDMI/DVI mode numbers, i.e. we won’t be able to use arbitrary mode timings at the hardware level)
AIUI the problem there was that even if you didn’t want to use a mode list generated from EDID, it was still using the EDID reported by the monitor to restrict your mode selections. We can avoid that problem simply by making sure that EDID is completely ignored if the user loads an MDF. The GPU drivers will still need to make sure the mode timings fit various constraints to avoid things going badly wrong, but those constraints shouldn’t be any worse than the ones that the GraphicsV vet mode call currently uses. |
|
DisplayManager already responds appropriately once ScreenModes changes the list of presented modes – so it’s doing the minimum bits. I can vet modes on the fly I guess as they are created – run each through GraphicsV 7, and only link modes which pass. That would probably suffice quite easily there – we generate the mode, then vet it before we add it to the linked list of modes. That’s turned what I thought might be a challenge into something straightforward. I presume any driver which can’t vet the modes will just pass everything – or again return all registers preserved? So R0=0 or R0 on exit = R0 on entry goes on the modes list, all other stuff bad? The ‘mode tweaking’ is why I’m having to go code-hunting at the minute. MDF creation was always a bit of a black art to me (I’ve wandered into this project through personal need to see it completed in order to best utilise my Pandaboard, though the bounty would justify buying a decent monitor which supports HDMI and has a more complete EDID to play with :-), plus pay for recent RISC OS expenditure.) Currently I’m relying upon presets from DecodeEDID’s source; but to move on we’ll need to be able to actually generate those values ourselves in a sane way. CMOS may be a later option – until I’m confident we know what the gotchas are. If we include it, and find everyone is using EDID by default, then I totally agree with the above. Until EDID has seen a little mileage (to find out how bad some monitors behave) then I’d rather it were Boot sequence-based so if it goes wrong fixing it is more trivial. |
|
Worked through Steve’s code – unfortunately didn’t find what I need there. What I needed is the VESA Coordinated Video Timings Standard document – which fortunately I have now managed to find :-). Now I need a magic chunk of free time :-). |
|
Did you find www.fl-eng.com/_lib/doc/vesa.xls too? |
|
Thanks for the EDID checkin :-D. Exhaustion is probably going to prevent further EDID progress today, but if I get a chance in the next couple of nights I’m going to work through the video timings standard document, which should get us more reliable modes for the established timings, and the ability to add the remaining missing modes. |
|
No problem! The mode_valid() function in ScreenModes contains an example of how to call GraphicsV 7 correctly (GraphicsV_VetMode). Basically if R4 is zero on exit and R0 is non-zero then the mode isn’t supported. Although if you wanted you could probably drop the check on R4 (which checks whether the call was claimed) because GraphicsV 7 should now be mandatory in all drivers (previously the NVidia driver was getting by without an implementation!) mode_valid() also does a few other noteworthy things:
Since the choice of pixel format may affect whether a certain set of mode timings are acceptable or not (e.g. a memory bandwidth limitation would obviously be related to both the pixel rate and the BPP), you’ll need to write the code so that if the first vet attempt fails it will try all the other pixel formats supported by the driver before giving up and saying that the mode isn’t supported. The easiest way of doing this is probably using some logic similar to what’s used in service_enumeratescreenmodes(), where it calls GraphicsV_PixelFormats to get a list of supported pixel formats (which is just an array of PixelFormat structs). If GraphicsV_PixelFormats isn’t implemented then it falls back to using a hardcoded list of formats which corresponds to the list of formats supported by the old GraphicsV_DisplayFeatures call. |
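The retry-across-pixel-formats loop reduces to something like this, with vet() standing in for the GraphicsV 7 call (returning 0 for “accepted”, in the spirit of R0):

```c
#include <stddef.h>

/* Sketch of the retry described above: if the vet fails at one pixel
 * format, try the driver's other formats before declaring the mode
 * timings unusable. The log2bpp values and vet callback are stand-ins
 * for the real GraphicsV_PixelFormats list and GraphicsV 7 call. */
typedef int (*vet_fn)(int log2bpp);   /* 0 = accepted */

int timings_usable(const int *log2bpps, size_t n, vet_fn vet)
{
    for (size_t i = 0; i < n; i++)
        if (vet(log2bpps[i]) == 0)
            return 1;     /* at least one colour depth works */
    return 0;
}
```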
|
Cheers Jeffrey. Just working my way through the timings stuff last night. If we can generate our own timings, things will be much better. Looking through the timings standard: the monitor specifies its preferred frame rate. For modern LCDs, should the user be offered a choice of frame rate at all? It should make no difference to the user, and we should probably just offer what the display prefers where possible. That way the user just picks X, Y and colours and we do the rest. Also – I am getting the impression we should be using ‘reduced blanking’ on LCDs wherever possible – this will provide larger resolutions on the Pi and Panda. If we know the display supports them, should we again default to reduced blanking? |
|
Also – is EDID in last night’s ROM build or will it be tonight’s? |
|
Yeah, when generating modes from EDID I’d say it’s probably best to stick to one framerate per X/Y where possible. However since colour depth will affect whether a mode is available (e.g. due to memory bandwidth limits) you’ll have to design the code so that it is still capable of generating multiple sets of mode timings for each X, Y combination. E.g. start off generating a set of mode timings that match the monitor’s preferred framerate, and see how many of the driver’s supported colour depths will work with those timings. Then if there are any colour depths which aren’t supported then use some clever algorithm to tweak the mode timings (different pixel rate, different porch/sync values, etc.) until you end up with N different sets of mode timings which together will provide support for all colour depths at a given X/Y resolution. The only problem is that because the drivers don’t provide any feedback for why a mode is being rejected it might not be straightforward to work out what set of mode timings your code should try next. So as I said a few posts above, if you’ve got any thoughts on what you’d like to see (e.g. drivers describing their limits, and/or GraphicsV 7 returning hints on why modes have been rejected) then let me know and I’ll try to make it happen!
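As a sketch of that generate-and-retry structure (stub types and a naive “just lower the framerate” tweak – real code would regenerate full timing sets on each attempt):

```c
#include <stddef.h>

#define MAX_DEPTHS 8

typedef struct { int x, y, hz, log2bpp; } timing_set;

typedef int (*vet2_fn)(int hz, int log2bpp);   /* 0 = accepted */

/* Fills out[] with one set of timings per colour depth, starting at
 * the monitor's preferred framerate and backing off until the vet
 * passes (or we give up below 30Hz). Returns how many depths got a
 * working set. */
size_t generate_timings(int x, int y, int preferred_hz,
                        const int *depths, size_t ndepths,
                        vet2_fn vet, timing_set *out)
{
    size_t count = 0;
    for (size_t i = 0; i < ndepths && i < MAX_DEPTHS; i++) {
        for (int hz = preferred_hz; hz >= 30; hz -= 10) {
            if (vet(hz, depths[i]) == 0) {
                timing_set t = { x, y, hz, depths[i] };
                out[count++] = t;
                break;
            }
        }
    }
    return count;
}
```

The weakness the post identifies shows up directly here: without rejection reasons, the inner loop can only guess blindly at what to try next.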
Yeah, using reduced blanking will be a big help on some machines. Even if you aren’t trying to go for high resolutions, reduced blanking may be the only option – e.g. early OMAP3 revisions had some tight limits on how large some of the sync/porch values could be, which meant that regular VGA timings wouldn’t work. I’m not sure whether the RiscPC likes reduced blanking timings – so you might also have to take into account situations where the monitor supports reduced blanking but the GPU doesn’t. Although it’s a bit academic in the case of the RiscPC because there’s no EDID support.
It should be in the latest ROM. |
|
Probably not. As far as I recall, frame rate was a way to trade off resolution for speed. Fewer frames per second means you can display more pixels for the same overall “clock” amount. In this day and age – and bearing in mind that RISC OS is used not only with LCDs but also TVs and older analogue devices via convertors, it might be best to “negotiate” between what the hardware is capable of and what the monitor can offer. So – we should probably offer a colour depth and a size, and let the actual frame rate be determined behind the scenes? To test this – I have just stared at Explorer, my “earth” backdrop, this web page, and part of my recording of The Troll Hunter on the PC running 1440×900 at 60Hz and at 75Hz to an LG L192WS monitor. If I look closely at the detail of the text, especially in areas of stark and rapid contrast changes, I can see some slight jitter at 75Hz – there’s probably a limit to how much you can pump through the VGA wires. If I sit back and stop looking for flaws, I can’t say I see any difference at all.
I would suggest “not unless necessary”. If the hardware can run with normal timings, this might be better. If, however, the reduced timings are necessary for it to work – well, it’s “a no-brainer”. ;-)
On the other hand – how many colour depths are redundant these days? Does anybody still use 4 colour modes? Actually, did anybody ever use it to begin with, post-Beeb?
That might be useful, yes. While a driver has specific insider knowledge of the video system in question, vetting modes is currently a pass-fail. If a driver could expose more info on its capabilities, other software may be able to make intelligent choices, instead of a “will this work? will this work? will this work?”.
Fingers crossed the xM will be less restrictive. ;-) |
|
Even if there’s no modern software which uses low colour modes, they’re still worth keeping around for use with emulators, so that the emulator doesn’t have to worry about translating the data.
“early OMAP3 revisions had some tight limits on how large some of the sync/porch values could be, which meant that regular VGA timings wouldn’t work.”
Yeah, they’d fixed that by the time the xM was released. |
|
Rick: why ‘not unless necessary’? Bear in mind reduced timings != interlacing. The idea is to reduce the blanking times. As far as I can tell, on LCD-type displays reduced blanking offers reduced bandwidth for the same output without any downside. Obviously we haven’t had the timing data to routinely do this previously, but we should now have the option (I think). I’d be interested to hear if there are possible downsides and what they might be. |
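For reference, the reduced-blanking arithmetic is small. This follows my reading of the CVT 1.1 reduced-blanking rules (fixed 160-pixel horizontal blank, minimum 460 µs vertical blank, pixel clock rounded down to 0.25 MHz steps; vsync width depends on aspect ratio, 5 lines for 16:9) – verify against the standard before relying on it:

```c
typedef struct {
    int htotal, vtotal;
    long pixel_khz;        /* rounded down to CVT's 0.25 MHz steps */
} rb_timings;

/* Rough sketch of CVT v1.1 reduced blanking. vsync_lines varies by
 * aspect ratio (5 for 16:9 per the standard's table). */
rb_timings cvt_rb(int xres, int yres, int refresh_hz, int vsync_lines)
{
    const int    h_blank      = 160;     /* fixed for reduced blanking */
    const double min_vblank_us = 460.0;
    const int    v_fporch = 3, min_v_bporch = 6;

    double field_us       = 1.0e6 / refresh_hz;
    double h_period_est_us = (field_us - min_vblank_us) / yres;

    /* Vertical blanking: enough whole lines to cover 460us, but never
     * fewer than front porch + sync + minimum back porch. */
    int vbi = (int)(min_vblank_us / h_period_est_us) + 1;
    int min_vbi = v_fporch + vsync_lines + min_v_bporch;
    if (vbi < min_vbi)
        vbi = min_vbi;

    rb_timings t;
    t.htotal = xres + h_blank;
    t.vtotal = yres + vbi;
    long exact_hz = (long)t.htotal * t.vtotal * refresh_hz;
    t.pixel_khz = (exact_hz / 250000) * 250;   /* 0.25 MHz granularity */
    return t;
}
```

For 1920×1080@60 this gives a 2080×1111 total at 138.5 MHz, against roughly 173 MHz for the same mode with CVT’s normal blanking – which is where the bandwidth saving on the Pi and Panda comes from.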
|
I am just thinking of wider compatibility. If standard timings will suffice, why not use them, reverting to the reduced timings when necessary? I’m not saying “do not use reduced timings”; I am just saying that if there is a choice between reduced and normal, perhaps normal may be the better option? |
|
I’ve just received an allocated filetype for EDID data (&F76). We’ll use that if people want to save EDID data and load it instead of a textual MDF (this is most likely to happen if you’re using anything which will interfere with reading EDID such as a KVM, or you want to use hardware which doesn’t support EDID, such as a RISC PC. EDID data should be superior to MDFs if the timing stuff works – currently looking at that). Would anyone with some graphical talent fancy making icons for it? I know Richard Hallas produced the Pi icon set – if he’s browsing the forums and still keeps his hand in on icons then ‘pretty please’; but if not and anyone else can oblige it’s likely to be better than my efforts :-). |
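A loader for such files ought to sanity-check the data before trusting it: the base block starts with a fixed 8-byte header, and every 128-byte EDID block sums to zero modulo 256. A minimal check (function name is mine):

```c
#include <stdint.h>
#include <string.h>

/* Validate one 128-byte EDID block: the base block must carry the
 * fixed 00 FF FF FF FF FF FF 00 header, and all blocks (base and
 * extensions) must checksum to 0 mod 256. */
int edid_block_valid(const uint8_t block[128], int is_base_block)
{
    static const uint8_t header[8] =
        { 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00 };
    if (is_base_block && memcmp(block, header, 8) != 0)
        return 0;
    unsigned sum = 0;
    for (int i = 0; i < 128; i++)
        sum += block[i];
    return (sum & 0xFF) == 0;
}
```

That should catch most of the corruption a flaky KVM or IIC read would introduce before a bad block gets saved as an &F76 file.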
|
“Would anyone with some graphical talent fancy making icons for it? I know Richard Hallas produced the Pi icon set – if he’s browsing the forums and still keeps his hand in on icons then ‘pretty please’; but if not and anyone else can oblige it’s likely to be better than my efforts :-)”
How about the existing “Display” icon with an arrow down out of the screen, indicating something coming FROM the display? |
|
Argh! Have just spent rather too many hours staring at the GTF calculations. State of the Union: GTF calculations produce timings that seem to work on the Pi. Bear in mind I’m aware that the Pi has a fudge factor for screen modes, so they may be wrong but the Pi is fixing them to be right. The GTF-calculated timings don’t seem to work on the Panda. More so, I think the horizontal timings all look too high compared to the ‘inbuilt’ set (or those seen in the MDFs). On the other hand, using the non-GTF timings the preferred mode on the Panda is rendering beautifully (2048×1152). Can’t get 2048×1152 to work on the Pi (despite there possibly being a reduced blanking mode which may be usable) – the picture goes off-screen on the right. The other downside is that the Panda seems to dislike reading the EDID as it stands – a non-zero IIC op code is being returned in R0 by the looks of things. Will need to investigate that further. Not quite sure what I can do with the GTF formulae at this stage – I think it may need other eyes on it. It’s not a show-stopper per se (we can use the inbuilts for ‘established timings’) but it will prevent a subset of modes being made available where we only know X, Y and the refresh rate. I may have to code-drop the code to ROOL with GTF on a switch; to me the code looks like it’s following the GTF timings :-(. Anyone got any ideas about how RISC OS uses the DPMS_Support, LCD_support and output_format flags in the MonitorDescription (and hence presumably in the MDF)? All these post-date Application Note 254. I can tell from EDID whether the signal is analogue or digital; for DPMS I’m given three flags for active-off, standby and suspend support. I can’t (interestingly) tell if it’s a CRT or TFT from the EDID block, so LCD_support is an interesting flag! |
|
The fudge factor is that it completely ignores the timings and only looks at the X, Y and colour depth :-)
I’ve just checked, and the timing limits on OMAP3 & OMAP4 are as follows:
Note that there’s no hardware support for a border region; instead the video driver just adds the border times onto the porch times
Reading EDID seems to be working on my BeagleBoard, so I’m guessing it must be a problem in the OMAP4 HAL. But from looking at the source I can’t see anything obvious.
dpms_state: This directly specifies the DPMS value that’s passed to GraphicsV 4.
lcd_support: I’m 99% certain this is a property that was added during Stork development (the cancelled IOMD laptop). I reckon we should get rid of it, as it’s doing some very IOMD-specific stuff which should really be done by the video driver instead.
output_format: I’m 99% certain this is an STB-specific thing; it was originally added for a project using Chrontel hardware (see here). I’d say keep the code in there on the offchance we need something like it in the future, but when reading EDID just initialise output_format to -1. That’ll stop the code from adding the value to the VIDC control list and confusing drivers.
I think you can also safely ignore external_clock (more STB-specific stuff, just set it to -1) and the interlaced flag (another STB-specific thing; it basically enabled some page table hacks that would allow two separate framestores (one per field) to be seen as one progressive framestore by the OS. See here for further info if you’re interested). |
|
Jeffrey: That would explain why I’ve now got a near-complete set of modes on the Pi (barring a usable 2048×1152, due to odd windows, which is clearly the Pi’s fault!). I’ve added standard timings support despite the numbers not being right. I’ve tried being sensible and providing a commented, careful set of calculations for GTF. If I can’t spot the problem soon I’m going to have to import the calculations from DecodeEDID, which are more likely to work but aren’t human-readable. EDID reading on OMAP4 works (I’ve used it to extract EDID data previously using a BASIC program) – it’s just that I think R0 is returning an unexpected value AFAICS (I’ve flagged anything where R0 != 0). I don’t know what that value is – just that it’s non-zero. I’ll tidy the error trap up so I get a proper return code, but I was sufficiently bothered by GTF not working that this was a side issue really. I’m not checking stuff in directly, so I’ve sent a code drop to ROOL; if you want/need a copy I’m more than happy for you to have it. |
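For reference, the 2-byte ‘standard timing’ entries (EDID bytes 38–53) decode as below, using the EDID 1.3 aspect-ratio encoding (older EDID versions used 00 to mean 1:1 rather than 16:10). The function name is mine:

```c
#include <stdint.h>

/* Decode one EDID 'standard timing' descriptor: X = (byte0 + 31) * 8,
 * refresh = (byte1 & 0x3F) + 60, and Y follows from the aspect ratio
 * in the top two bits of byte1. Returns 0 for the 0x01 0x01 'unused'
 * filler. */
int decode_standard_timing(uint8_t b0, uint8_t b1,
                           int *x, int *y, int *hz)
{
    if (b0 == 0x01 && b1 == 0x01)
        return 0;
    *x = (b0 + 31) * 8;
    switch (b1 >> 6) {
    case 0:  *y = *x * 10 / 16; break;   /* 16:10 (EDID 1.3) */
    case 1:  *y = *x * 3 / 4;   break;   /* 4:3  */
    case 2:  *y = *x * 4 / 5;   break;   /* 5:4  */
    default: *y = *x * 9 / 16;  break;   /* 16:9 */
    }
    *hz = (b1 & 0x3F) + 60;
    return 1;
}
```

Note that a standard timing only pins down X, Y and the refresh rate – the actual mode timings still have to come from GTF or CVT, which is exactly the bit under debugging above.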
|
If you do go down that route, you could leave your version in there, but ifdef it out with a suitable comment and someone might be able to spot the bug and switch to your more maintainable version.
We’re likely to just review the code at the moment, rather than commit it (unless there’s a strong case for committing at this point). So for the sake of keeping things moving, feel free to send work-in-progress versions to whoever needs them. |