AcornC/C++ !SetPaths broken
nemo (145) 2546 posts |
The Castle additions – SetUp and !Msg – both load things into the application slot without using *WimpSlot first, and hence fail if Next is set to 0K. I spotted this because I thought 640K was quite a lot just to display a one-line message, so I tried the obvious. And the rigmarole of getting the developer stuff onto an emulated machine was very nearly More Trouble Than It’s Worth. A bzipped ISO? Pain in the ARSE. Firstly, bzip2 is almost completely unusable on a HostFS-like filing system because its file-extension grokking and its inability to unambiguously specify an output file are incompatible with HostFS’s extension/filetype mapping. And an ISO is a rather difficult thing to deal with too. I got bored with the impenetrable “BadMode” error when trying to use CDFaker, so used VirtualCD on Windows instead. Why on earth isn’t the whole lot a ZIP instead? Too easy?! |
nemo (145) 2546 posts |
And thanks to the AcornC/C++ tools I have another directory of washed-out icons I’m going to have to gamma-correct. It’s clear that no one who designs these icons or writes these programs has a calibrated monitor… and I find that very worrying indeed. You do all know that everything in RISC OS gets the colour wrong unless you have a calibrated monitor, don’t you? Grumble, grumble. |
Rick Murray (539) 13840 posts |
Isn’t that just generally true, like, everywhere? Even though Windows devices use sRGB, you can’t be certain the reproduction is correct without some form of actual calibration. For example, my €3 LCD monitor has options for setting “colour temperature” which can dramatically alter how things look. I tried to calibrate the monitor built into my eeePC and things just looked awful, so I reverted to uncalibrated.

At work there is a constant dispute over the medium-sized gloves: is the box green or blue? (It is sort-of aqua, so it looks more green to me, but I can see how some might perceive it as blue.) The point is – we all perceive colours differently. Most domestic systems will have an uncalibrated setup that will suffice for most tasks, but even with calibration, there’s no guarantee that things will look to you like they look to me. ;-) Does RISC OS support sRGB?

BTW, just for a laugh, and to see how far off supposedly compliant equipment is (all my stuff “claims” to be sRGB), take a photo. Any photo, but something with strong colours like a summer sunset would work. Scan it. Print it. Now put the photo on the left of the monitor, the scan on the screen, the printout on the right. On most domestic systems, the lack of calibration will be inordinately obvious. |
nemo (145) 2546 posts |
No. I’m not talking about slight differences in colour temperature or CRT/LCD/plasma primaries (nor the inherent limitations and essentially unsolvable differences between additive and subtractive colour systems). I’m talking about a basic mathematical error, caused by either ignorance or carelessness (or the mistaken belief that it didn’t really matter), that makes ChangeFSI, ColourTrans and FontManager get things dreadfully wrong, and forces more capable software such as PhotoDesk, Vantage and Composition to be consciously designed to operate with a linearly calibrated monitor. The solution isn’t “calibration” (as in I now know precisely how red it is) but gamma correction – calibrating the display to a linear response to mitigate the bug/misdesign in these crucial systems.

The problem is hugely embarrassing and is simply described: if you create a sprite filled with “50% grey”, i.e. byte values of 127 or 128 (whether 24bit RGB or paletted), and then ask ChangeFSI to dither it into a 2 colour mode… or ask ColourTrans for a 50% grey in a 2 colour mode… what will you get? The answer is a checkerboard. Half the pixels are white, half are black, and assuming they’re sufficiently small they mix optically in the eye to produce (roughly) 50% brightness. That seems reasonable, doesn’t it? 50% grey becoming 50% brightness?

Unfortunately, that’s not what monitors actually do, out of the box. Put 50% grey in video memory (127 or 128) and what you get out of the VGA connector is approximately 0.5 volts, where black is nearly zero and white is 1 volt. So that’s 50% again. However, when a monitor gets a 0.5V signal (or the digital equivalent) it does not display 50% brightness. Typically it will display something around 22% brightness. Think about that for a moment. That means that every pixel that has a byte value of 128 is intended to represent 22% brightness. Yet ChangeFSI and ColourTrans think that 128 means 50% brightness. FontManager makes the same mistake in reverse – when antialiasing geometry that covers 50% of a pixel, it approximates the coverage by using a byte value of 128… which is a brightness of 22%. D’oh.

There’s little point speculating who made these mistakes or why, but one can describe why monitors work like this. Human visual perception, much like hearing, is roughly logarithmic – you can clearly hear a whisper in a quiet room, but not next to a racing car (though the whisper is the same volume). Similarly the eye is more sensitive to small changes in brightness among dark shades than bright ones. Given the necessary quantisation of an 8bit-per-component display system, how do you “space” those shades to minimise the apparent quantisation – how do you avoid obvious bands? The answer is to have lots of dark shades and fewer bright shades. This is why display systems have a non-linear response – because we have non-linear eyes and we want to mitigate quantisation artefacts. Sadly the author(s) of ColourTrans, ChangeFSI and FontManager didn’t know, or didn’t care.

Earlier I mentioned 2 colour modes, but that was merely to easily demonstrate the error – when the code is carrying out every colour calculation believing that 22% brightness is actually 50% brightness, everything is affected – colour matching, quantisation, dithering, rescaling, antialiasing, transparency – it’s hopeless. The solution is to calibrate the system to be linear, so that 50% in video memory produces 50% brightness, thereby allowing all that dodgy code to do what it was intended to do.
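To put numbers on it, here is a minimal sketch of the arithmetic in plain C – assuming an idealised display gamma of exactly 2.2, and emphatically not ColourTrans or ChangeFSI code, just the maths they get wrong:

#include <stdio.h>
#include <math.h>

#define GAMMA 2.2

/* Brightness the monitor actually produces for an encoded byte */
double encoded_to_brightness(int byte)
{
    return pow(byte / 255.0, GAMMA);
}

/* Encoded byte that actually produces a given brightness */
int brightness_to_encoded(double brightness)
{
    return (int)(255.0 * pow(brightness, 1.0 / GAMMA) + 0.5);
}

int main(void)
{
    /* Byte 128 is treated as 50% brightness, but actually displays: */
    printf("128 displays as %.0f%% brightness\n",
           100.0 * encoded_to_brightness(128));          /* ~22% */

    /* The byte value that really produces 50% brightness: */
    printf("50%% brightness needs byte %d\n",
           brightness_to_encoded(0.5));                  /* 186  */

    /* So a black/white dither matching byte 128 should be mostly
       black: the fraction of white pixels equals the linear
       brightness that 128 represents. */
    printf("dither for 128: %.0f%% black pixels\n",
           100.0 * (1.0 - encoded_to_brightness(128)));  /* ~78% */
    return 0;
}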
The trouble is that when you do that, all those existing icons change radically. The Cerilica !Monitor calibrator mitigated this somewhat (in the RO3 days) by reverse-correcting the 16 colour desktop palette, so that 16 colour icons still looked as intended. Unfortunately that’s not practical for 256 colour sprites, and it becomes problematic when you realise that you really don’t want to reverse-correct everything, as that would include the very output that you’ve been trying to force to be right! So the only alternative was the Theme Protocol – to allow applications and users to choose not only OS-themed icons, but also (orthogonally) icons designed for a calibrated monitor.

The Monitor module includes an API call to read (and set, of course) the gamma response of the system. I’m pretty sure that’s the only API to do so on RISC OS. PNGs do support a gamma tag, so they can be displayed unambiguously. One could easily define a TIFF tag too, if there isn’t one already. However, Sprites have no such concept, and though one could add a gamma tag to a sprite file (using the post-header structured-data method for which there is much precedent), that can only really mark the intended gamma of the whole file, not individual icons… and in any case such functionality does not yet exist.

In more capable OSes all this difficulty is “handled” by (hidden within, more like) ICC profile support. Profiles necessarily encapsulate the intended gamma response of a colourspace, and whether an Input, Output or DeviceLink profile, they can gamma-correct on the fly as well as make subtle changes to the primaries (and convert between additive and subtractive spaces, of course). But ICC profiles are HUGE, difficult to produce, computationally expensive to apply, and really do far too much when all we’re trying to do is get around the fact that certain authors all that time ago missed one crucial mathematical detail.

So, is this “generally true everywhere”? No. Absolutely not. Not in this century. No. |
Jeffrey Lee (213) 6048 posts |
Are you aware of PaletteV 9? This allows you to set the gamma table the video hardware uses, so you can make your byte value of 128 correspond to 50% brightness. From looking at the kernel sources I can see the wiki is a bit out of date (there are now calls to read the gamma tables), but I’m still not sure how well all of this is integrated into the OS/programs (e.g. whether !Thump reads the current gamma table and uses that as a basis when it fades the screen in/out for slideshows).
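For illustration, here is the sort of table that does the job – a sketch in C, assuming 8-bit table entries and a 2.2 display gamma; how the table is actually handed to PaletteV 9 is deliberately omitted, see the documentation for the real interface:

#include <math.h>

/* Builds a 256-entry gamma table that linearises an assumed
   2.2-gamma display: after the monitor applies its ^2.2 response,
   byte value i comes out as i/255 brightness, so 128 -> ~50%. */
void build_linearising_table(unsigned char table[256])
{
    int i;
    for (i = 0; i < 256; i++)
        table[i] = (unsigned char)
            (255.0 * pow(i / 255.0, 1.0 / 2.2) + 0.5);
}

Note that with 8-bit entries several inputs collapse onto the same output value near white – the loss of distinct shades nemo mentions below. |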
nemo (145) 2546 posts |
Indeed, and that is of course what the Monitor module and Cerilica !Monitor application use. Incidentally, the Monitor module also has an API for setting the calibration and applying that calibration to any modification done via PaletteV, and also an API for automatic screen fading and tinting. So programs that expect that table to be linear (and hence for the display system to have a gamma of around 2.2) can continue to operate (with a corrected result) when manipulating it accordingly. There’s also a hi-def API which was included in anticipation of hardware with better than 8bit per component mapping capabilities, as you can’t create a non-linear mapping from 8bit to 8bit without reducing the number of distinct shades. It would be good to know if any contemporary systems could make use of that added precision. |
Raik (463) 2061 posts |
I use the Cerilica !Monitor application on the Pandora to calibrate the LCD. It works fine. |
Rick Murray (539) 13840 posts |
Ah, yes. I have !Gamma on my RiscPC. I needed to calibrate it to my monitor(s), and I could never get a red that made the blue look correct (and vice versa)!
The question is – who is at fault? The software for being simplistic, or the monitor for working obscurely? I recall from photography at school that the grey swatches were non-obvious as to their sequence (50% grey seemed darker than logic would imply, IIRC). Is it a log law or somesuch?
:-) Go to a supermarket. Look at the TVs. They have a dozen (or more) side by side. Notice how no two are alike. You’d have thought, in this digital age, two tellies would display the same picture from the same input. So we’re back to the question: Is it the software/data that is wrong, or the display device?
I think, without ugly/weird dithering, that a checkerboard is about the best you can hope to get on a 2 colour display. Surely the more colours are available, the better the results will be? When I asked my (PC) graphics program to convert R127,G127,B127 to monochrome, it gave me the checkerboard. When I told it to use “dispersed”, it did a better job, offering up:
I remember pulling gamma data back in the day. I think it was a hack to poke around in the VIDC or its workspace. I remember I screwed up something and everything went black. Oops!
Computer: eeePC 901 with the non-shiny display panel. As mentioned: a photo, on screen, printed. Photo natural. On-screen, a bit cartoony. Printed, washed out and darker than it should be, lacking contrast. Same after artificially pushing it. Bloody hard to get this printer to print what I see on-screen. The scanner doesn’t mention correction that I can see. So, I accept that, in this century, capable OSs apply their own tacit conversions between colourspaces and devices (i.e. the scanner reads in RGB and the printer uses CMYK). However, on a domestic out-of-the-box setup, it gets it wrong. It isn’t just this crappy Brother printer; I have on my older PC (someplace) a textfile listing a set of “corrections” to get acceptable output from my Lexmark z23, and on the RiscPC I used to have to tweak photos to get them to print well on the Epson Stylus Color 640 (now that was a cartoony printer). I used to do the corrections on a PC because the old software I had on my RiscPC was tragically slow. I can tell you two things:
Where might I find the !Monitor application? I went to the Cerilica website and … it’s a Japanese dating site. One of the translations is, and I quote: “It is a site famous as the site of the first etch looking teen does not have to encounter resistance in the mail. Although not suitable for mature ladies, I would love a young child.”. I would like to believe that this is a limitation of Google’s auto-translation, although “若い子” is… yeah… jeez… Hentai!!!! [edit: the hentai link is to a wikipedia article, but the attached photo (was this snuck in?) is very NSFW] |
nemo (145) 2546 posts |
Ignorance is no defence. It’s the fault of the software author. To be fair, it’s a difficult subject, but it’s not like the gamma characteristics of video standards were a secret.
It’s a power law. Monitors have a gamma of approximately 2.2, so 0.5 ^ 2.2 ≈ 0.2176.
All of which will be shipped from the factory in an artificial “vivid colour” mode which is designed purely to attract the eye in the brightly-lit shop. The first thing you do when you get it home is switch all that crap off.
No, you misunderstand. A checkerboard is a great dither pattern… if you’re trying to match a grey value of 186. But if you were trying to match 128 it’s dead wrong (unless you have adjusted your system to display 128 as 50% brightness). The 2 colour demonstration is merely a manifestation of the incorrect maths employed by ColourTrans and ChangeFSI, and the same mistake in reverse in FontManager (don’t get me started on ArtWorks, which is even worse). The problem isn’t that there aren’t many colours, it’s that the maths is treating 22% as though it is 50%, and that manifests itself in all colour depths. I did try to explain that the 2 colour dithering example was simply a convenient thought experiment, but I feared it might distract from the underlying cause.

Forget dithering; let’s consider ChangeFSI rescaling a 24bit image (to a 24bit destination). Let’s imagine ChangeFSI is scaling down by a factor of 2, so a square of four input pixels becomes one output pixel. If all four of those pixels are 50% grey then the output is (0.5+0.5+0.5+0.5)/4 = 50% and all is well. However, if one is red, one is green, one is blue and one is white, then each resulting component is (1+0+0+1)/4 = 50%, i.e. 50% grey. Sadly this is only correct if the display has a gamma response of 1.0. It’s usually 2.2, so to achieve the intended 50% brightness those calculations should have resulted in 73%.
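The right way to do that average – a sketch assuming the usual 2.2 display gamma, not ChangeFSI’s actual code – is to linearise, average, then re-encode:

#include <math.h>

#define GAMMA 2.2

/* Average four encoded component bytes correctly: convert to linear
   light, take the mean, then re-encode for the display. */
unsigned char average4(unsigned char a, unsigned char b,
                       unsigned char c, unsigned char d)
{
    double lin = (pow(a / 255.0, GAMMA) + pow(b / 255.0, GAMMA) +
                  pow(c / 255.0, GAMMA) + pow(d / 255.0, GAMMA)) / 4.0;
    return (unsigned char)(255.0 * pow(lin, 1.0 / GAMMA) + 0.5);
}

For the red/green/blue/white example above, average4(255, 0, 0, 255) gives 186 – about 73% encoded – where a naive byte average gives 128.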
I don’t doubt it, but does your program have the slightest idea of your calibration? Having just performed the same operation in Photoshop configured with AdobeRGB and a dithered conversion to monochrome, I get this: http://twitpic.com/au3tl7/full – hopefully your browser won’t mangle it too much, and if you squint you should get very similar shades between the dithered and solid halves. Now note how much black there is in the dithered half… in fact I’d expect about 78% black pixels to match the 50% grey!

As for your printer: yes, if you haven’t calibrated your display and your printer – if you don’t have the correct ICC profiles attached to those two devices – then you can’t expect to get useful results from the combination of the two! I once had a customer come to me to complain that he had done a print job through Vantage and it had come out with smudges and marks in what should have been an area of solid colour. I loaded the file he had brought onto my machine and it popped up on screen… with the same smudges and marks he saw on the output. “Have you calibrated your monitor?” I asked. “No, I didn’t bother” came the reply. In that case he had an incorrect black level (brightness) on his monitor which was making all dark colours look the same (when they really weren’t). At least LCDs are harder to maladjust than CRTs!

Native resolutions vary from system to system, so point sizes cannot be expected to be the same. Also, if you have Win7 (or Vista, but you wouldn’t be using Vista, would you?!) then, whether you want it to or not, Windows Colour System is doing colour correction for you at all times. With XP and earlier it’s up to individual applications to Get It Right… and as you point out, many don’t.
What you do in the privacy of your home is your own concern. :-p Yes, sadly cerilica.com is long gone. I sometimes consider grabbing one of the alternatives, but then come to my senses. I think ROOL is a better place for all the low-level stuff. |
nemo (145) 2546 posts |
Having grumbled about the gamma correction of icons in the AcornC/C++ bundle (and having laboriously converted them all), I realise that there is an opportunity for a more intelligent solution. In each case I moved the original sprites to a Theme.Default subdirectory and added the converted versions as other themes. The majority of those icons were paletted 256 colour sprites, so it is only the palette that needs to be corrected (not something ChangeFSI can do, so I knocked up a little utility to automate it).

Now, storing all those paletted icons again in RO3_Linear and RO4_Linear theme directories when only their palette had changed is pretty wasteful, which is something the Theme Protocol sought to avoid. A more intelligent version of (or alternative to) IconSprites could instead apply gamma correction to the palettes (and, at a push, sprites) in the application directory or Theme.Default directory, and hence avoid the duplication. Obviously where semantic elements change between OSes, appropriately themed icons must be supplied – directory colour being an example. But the ability to avoid defining gamma response as a theme choice would be advantageous. In fact, any theme not ending “Linear” should probably be considered for such processing.
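The palette fix itself is tiny. A sketch of what such a utility does to each palette entry – assuming (my assumption of the direction) icons authored for a 2.2-gamma display being corrected for a linearised one, with all the sprite-file plumbing omitted:

#include <math.h>

/* Convert one palette component authored for a 2.2-gamma display so
   it looks the same on a linearised (gamma 1.0) display: store the
   brightness the icon's author actually saw. */
static unsigned char correct_component(unsigned char value)
{
    return (unsigned char)(255.0 * pow(value / 255.0, 2.2) + 0.5);
}

void correct_palette(unsigned char rgb[][3], int entries)
{
    int i, c;
    for (i = 0; i < entries; i++)
        for (c = 0; c < 3; c++)
            rgb[i][c] = correct_component(rgb[i][c]);
}

Correcting a 256 colour sprite this way touches only the palette data and leaves the pixels alone. |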
Rick Murray (539) 13840 posts |
Well, in that case surely it should be either the monitor or the graphics card that applies correction, not the software? Or are we in a situation where “the gamma is 2.2” is written in stone and if some future display technology happens to have correct responsiveness, it will need to be artificially borked to work with “this is how we’ve always done it”? I would not imagine LCD panels have the same characteristics as CRTs, yet I can plug in one and go with no changes to anything else. Is this natural or is the LCD faking the behaviour expected of a CRT? Certainly the latter case is the simplest to handle, but a pile of odd things doesn’t make a right thing.
Must not be terribly discerning clients then. The only time I expect to see pictures like that is when watching animé! Yes, I agree the best thing to do is turn the rubbish off, but it may be the rubbish that dissuades me from making the purchase. Well, you know, in some alternate reality where I can afford to drop half a grand on a telly!
As I read this on my phone at work, it occurred to me… isn’t this what ColourTrans was for? If ColourTrans is fixed, would this fix a lot of the problems, or does all the software you mention do “its own damned thing”?
Not even remotely similar. I don’t need to squint, just take my glasses off and sit back a bit. The grey is quite a bit lighter than the dither. The video card has a setting for gamma correction (default = 1.0) and I can make it all brighter, but not any darker. Fiddling with the brightness option doesn’t do anything useful either.
Why does this stuff claim to use sRGB if it’s pretty much useless? I’m still looking for a “calibrate the printer” option. Forty-five ****ing minutes later. I got an eeePC901.icc profile. It needs a tool to get it working with Windows. [here: http://www.microsoft.com/en-us/download/details.aspx?id=12714] I think I’ll let my email to Microsoft do the talking:
Of course, I got an ack for my email so I clicked on a “Save this message” button only to see
Hmmm… So not only are colours open to wild interpretation, so are sizes of things. It’s a wonder WYSIWYG ever worked at all! ;-)
Hic! I do have some standards, y’know. If I’m going to fantasise over a young girl, she needs to be animated. :-p
Yes, this should be done at OS level so it is something that can work with everything, rather than reinventing the wheel over and over. |
Rick Murray (539) 13840 posts |
Update: The graphics properties advanced tab has a “color profile” thing hidden away. I have associated the icc profile I downloaded with the monitor. Doesn’t look much different, but if I sit back and take off my glasses and squint as well, I can almost make the grey and the dither look similar; but closer up it is very different (the grey gets lighter – is this a property of the LCD?). |
nemo (145) 2546 posts |
It isn’t done for tradition’s sake; it’s the most efficient encoding to minimise visual differences between adjacent shades. Our response to light is not linear with brightness – we are more sensitive to differences in dark shades than light ones. The reason we have gamma curves (and that gamma curves are built into almost all image formats) is because we have 8bit components. Even 16bit components usually have a gamma curve applied. It is only when you get to really deep components – floats or RAW image formats, for example – that you can dispense with a gamma curve without compromising image quality. I don’t advocate a 1.0 gamma because it’s “better” than 2.2. It is mandated by the fact that all the RISC OS software was written implicitly or explicitly requiring it!
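A quick illustration of that trade-off – a sketch, assuming a 2.2 gamma, simply comparing the two encodings:

#include <stdio.h>
#include <math.h>

/* Compare the darkest representable shades under linear coding
   versus 2.2-gamma coding of an 8-bit component. */
int main(void)
{
    int i;
    for (i = 1; i <= 4; i++)
        printf("code %d: linear %.4f%%, gamma %.5f%% brightness\n",
               i, 100.0 * i / 255.0,
               100.0 * pow(i / 255.0, 2.2));
    /* The gamma-coded steps near black are hundreds of times finer,
       exactly where the eye is most sensitive - so banding is far
       less visible for the same 8 bits. */
    return 0;
}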
Indeed, ColourTrans would need to be “fixed”, but so would FontManager, and ChangeFSI, and PhotoDesk, and Vantage, and ArtWorks, and every image processing or creation program on RISC OS. That’s the scale of the problem, and that’s why adjusting the output curve to match the software is the key. And the quantisation problem can be partially solved by having more than 8 bits per component in the output curves – as high-end colorimetric monitors do. 12 or 14 bits per component is not unusual in such monitors’ LUTs.
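To see why the extra bits help, here is the same linearising table as the earlier sketch, but with 12-bit entries – again only a sketch; the 8-bit version necessarily merges some adjacent inputs near white, whereas here all 256 inputs stay distinct:

#include <math.h>

/* 256-entry linearising table with 12-bit outputs, as found in
   high-end monitor LUTs. The minimum step between consecutive
   entries is about 7, so no two inputs collapse onto the same
   output and no shades are lost. */
void build_12bit_table(unsigned short table[256])
{
    int i;
    for (i = 0; i < 256; i++)
        table[i] = (unsigned short)
            (4095.0 * pow(i / 255.0, 1.0 / 2.2) + 0.5);
}
|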
Ben Avison (25) 445 posts |
While the authors of FontManager etc aren’t around to defend themselves, I feel like I should mention the obvious justification for them assuming a gamma of 1.0 – that it makes their maths a lot simpler and therefore faster. They may have been aware of gamma issues and ignored them for this reason, especially considering that the anti-aliasing shades selected are considered by most people to be “good enough” even if they’re not 100% technically correct. RISC OS’s main strengths these days are its compact size (useful for learners, but of decreasing importance for hardware cost reasons as storage capacities continue their upward march) and the fact that it’s a lot faster than alternative OSes on the same hardware. I’d hate to see the speed edge we have compromised over an idealistic point… |
nemo (145) 2546 posts |
So would using 3 as the answer to everything. Very fast, and also wrong. ;-)
Getting 78% when you thought you’d asked for 50% is quite a long way from being 100% correct, technically or otherwise. To go to the trouble of rendering a font glyph at four times the size, then counting up how many subpixels have been hit in each pixel to decide what antialiasing level to use, only to get it wrong by up to 56% (78 is 156% of 50) when looking up the output colour in a palette anyway, is nothing to do with speed optimisation. It’s just a cockup.
Using the ‘VIDC’ LUT to linearise output actually answers all of your concerns – speed unaffected, code unchanged, everything works as expected. The Monitor module formalises that, introducing not only a calibration curve so that naïve manipulators of PaletteV have the calibration applied to their curves, but also a documented way of reading the current gamma. Plus, the Theme protocol introduces a pragmatic way of working with linear displays in addition to 2.2 gamma displays. Awareness of the issues is important. Dismissing them as “idealistic” is very far off the mark. If I wanted to be cruel I’d ask where the ICC profile support is. ;-p |
nemo (145) 2546 posts |
To get back on topic though, despite having paid real actual pennies I still haven’t been able to build a single thing with AcornC/C++, which I think is a shame. |
nemo (145) 2546 posts |
Actually, on mature reflection: if you ACTUALLY think getting the answer wrong by 56% is OK, can you lend me £100? I can repay you immediately – not necessarily 100% technically correctly, but it’ll be good enough, because I know you’re not idealistic. >:-p |
Matthew Phillips (473) 721 posts |
Could you describe the problems you’re having a little more so that we can see if someone will help? I guess from the “washed out icons” comment that you have unpacked everything OK now. What have you tried to build? |
nemo (145) 2546 posts |
As described here, any attempt to build any of the supplied examples produces:
AcornC/C++.Examples.exampleabsasm.CApp: would not open
I get “would not open” on every single one. |
Chris Gransden (337) 1207 posts |
It’s assuming that !Builder is running and a valid build environment has been selected. |
nemo (145) 2546 posts |
What the heck is !Builder? The AcornC/C++ directory contains !SetPaths and a number of directories including Examples and Tools. There’s no !Builder in Tools (nor anywhere else I can find) and I’m only trying to build the examples in the Examples directory. |
Jeffrey Lee (213) 6048 posts |
From the brief look I had at this last night, it looks like ROOL had the bright idea to write a bunch of new examples which use the same makefile fragments that the ROM build system uses. But then they forgot to include those makefile fragments in the C/C++ distribution. There should be a ‘legacy’ folder containing the original examples – although in the brief look I had through there (whether with the current C tools or older versions) I didn’t spot any makefiles, except a Dhrystone one which I had a feeling I wrote myself. |
Steve Revill (20) 1361 posts |
Yes, we did forget the shared makefiles. The next DDE release should address a lot of these issues. |
Martin Bazley (331) 379 posts |
And the next release is scheduled for…? |
nemo (145) 2546 posts |
Precisely. The USB stick is nicely printed, but I’d like to be able to use the contents. |