SpriteExtend issues?
Garry (87) 184 posts |
Hello,
On running that, I get a dialogue which declares: ‘SpriteExtend: Bad colour translation table’. I feel I’m making progress, but at the same time I’m somewhat stumped. I don’t know anything about the Wimp, so a code example would go a long way here to illustrate what I’m doing wrong. I cannot find a jot of documentation on how to do this, so I’m just randomly hitting keys at this point. I’m using OSLib; if that’s an issue, I’ll change to plain SWIs if need be. Cheers Garry |
Jeffrey Lee (213) 6048 posts |
Most OS_SpriteOp sprite plot calls require you to supply a translation table, which is used to convert the pixels between the source and destination palettes. The basic procedure for correctly plotting a sprite is: call ColourTrans_GenerateTable with a null destination pointer to find out how big the table needs to be, allocate a buffer of that size, call ColourTrans_GenerateTable again to fill the buffer, and then pass the table to the OS_SpriteOp plot call.
If you’re going to plot the same sprite lots of times then it’s recommended to cache the translation table. You should be able to keep it until the next mode change, or until the screen (or sprite) palette changes. It’s also possible to share translation tables between sprites, if (a) they both have the same mode word and (b) they both have the same palettes. Note that both ColourTrans_GenerateTable and OS_SpriteOp have a bunch of flags which control how the table is generated and used. You’ll generally want bit 4 of the ColourTrans_GenerateTable flags set, and bits 4 and 5 of the OS_SpriteOp flags. |
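The call-twice pattern described later in the thread (ask ColourTrans_GenerateTable for the table size, allocate, then call again to fill the buffer) can be sketched in plain C. This is a simulation of the calling convention only: `generate_table` below is a stand-in, not the real SWI, which real code would reach via OSLib (e.g. `colourtrans_generate_table_for_sprite`) or `_swix`.

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for ColourTrans_GenerateTable: NOT the real SWI.  It
   mimics the convention of returning the required size when given
   a NULL destination, and filling the buffer otherwise. */
static size_t generate_table(unsigned char *dest)
{
    static const unsigned char dummy[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    if (dest != NULL)
        memcpy(dest, dummy, sizeof dummy);   /* second pass: fill */
    return sizeof dummy;                     /* both passes: size in bytes */
}

unsigned char *make_translation_table(size_t *size_out)
{
    size_t size = generate_table(NULL);   /* pass 1: ask for the size */
    unsigned char *buf = malloc(size);    /* allocate that many bytes */
    if (buf != NULL)
        generate_table(buf);              /* pass 2: fill the buffer */
    if (size_out != NULL)
        *size_out = size;
    return buf;  /* cache until the mode or either palette changes */
}
```

As Jeffrey says, the result can be cached and reused until the next mode change or palette change, so in real code you would hang this buffer off your sprite rather than regenerating it every plot.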
Garry (87) 184 posts |
Thank you Jeffrey, I really appreciate the help. I am now attempting to generate a ColourTrans table, and it runs without error, returning 8 as the length. I then allocate that memory, then run it again, so I’m doing this:
Then I plot the sprite, but I get ‘Internal error: Abort on data transfer’. Is it possible that OSLib does not have the constants required for the new sprite plotting, or something? Many thanks again. Garry |
Jeffrey Lee (213) 6048 posts |
I can see two problems. The first is that you’re confused over how to specify the flags in the first argument to OS_SpriteOp.

The second problem is that you’re mixing Wimp_SpriteOp and OS_SpriteOp calls. This is a problem because only the Wimp knows how to find sprites which are located within its sprite pool – OS_SpriteOp knows absolutely nothing about it. So if your sprite is located in the Wimp sprite pool, you should only ever use Wimp_SpriteOp to interact with it (i.e. use the Wimp_SpriteOp equivalents of the OS_SpriteOp calls).

Also remember that the Wimp sprite pool is rather volatile – when you call Wimp_Poll, another task could add or remove sprites from the pool. So you shouldn’t cache any pointers to sprites which are in the Wimp sprite pool (or at least you shouldn’t cache the pointers across calls to Wimp_Poll). In your case I think the only time you’d actually need to know the pointer to the sprite is when you’re generating the translation table.

And because the Wimp sprite pool is shared by all tasks, it’s generally considered bad form to put arbitrary sprites into it – the pool should only really be used for sprites which have had their name reserved (application sprites, file type icons, etc). If your program needs to use arbitrary sprites of its own creation then it should create and manage its own sprite pool(s). But since you’re just getting started I’ll let you off for now ;-) |
Garry (87) 184 posts |
Thanks for the help Jeffrey, my plotting still does not work, but I do feel I am at least beginning to understand some of the concepts. I have reworked my code to only use ‘wimp’ functions. I appreciate it’s not cool to do this for any old sprite, but I’d like to get plotting working first and then I’ll see about making things the way they should be. My code is now this:
I’m not using ColourTrans now, and I get the error below. The funny thing is, if I switch into a 64K mode from my usual 16M-colour mode, the result almost looks better than in the 16M mode: at least I can see the fade of the circle I’m trying to render, albeit in the wrong colours. So I’m not quite sure where to go with this. Does anything here strike you as wildly wrong? The window is !PrivateEye showing the sprite, which appears as a square in my app… Thanks |
Garry (87) 184 posts |
Hmm, I was just about to upload some screengrabs to illustrate my point, but !FTPc did not work, so I decided to restart. On restarting, my Pi will no longer boot RISC OS (it’ll boot Plan 9 on a different SD card). On putting the SD card into my PC, it cannot see the FAT partition any more, so I assume it’s hosed. I’ll need to find some time to fix this before continuing with my work. Still, I’d love to hear any ideas on how I can render an alpha-blended image… UPDATE: I’ve recovered from backup. I expected it to take ages, but I’m back using RISC OS! |
Jeffrey Lee (213) 6048 posts |
I think there are two problems you’re seeing here:
|
Garry (87) 184 posts |
Hi Jeffrey. If it matters, I’m now on RISC OS 5.21 (26-Feb-14), since I recovered from my disk woes. Before, I was on the 8th April build, if memory serves, although it appears to behave the same in this respect. For some reason, since I recovered from my disk issue, I can no longer select a 64K mode. I’m using the same ‘HP LP2565’ monitor, at 1920×1080. However, to be honest, if my program only works in 16M colours, I’m OK with that. In this day and age of cheap and capable RISC OS machines (in no small part thanks to you), 16M colours are easy to come by. |
Jeffrey Lee (213) 6048 posts |
It doesn’t look like I’ll ever find the time (or remember) to write an example in C using OSLib and the Wimp, so here’s the next best thing I have available – a bona-fide RISC OS 5 format sprite with an alpha channel and a bare-bones example of how to render it using BASIC: http://www.phlamethrower.co.uk/misc2/alpha.zip
The OS version does matter; to date none of the Pi RC builds have support for the new sprite formats (or the 64K screen mode). You’ll have to use one of the nightly development builds. The 8th of April build should be fine, if you still have it. |
Garry (87) 184 posts |
Jeffrey, this is great. I put your sprite into the Wimp sprite pool, and my program renders it in all its alpha-blended glory. So now I will see about tidying everything up to use its own sprite pool and continue with the rest of my application. Thank you for your wonderful help; it’s really fascinating to see RISC OS render such images – it opens up a lot of potential. Jeffrey Lee, you are a good man. Cheers Garry |
Steve Revill (20) 1361 posts |
On a vaguely related note, how would I go about plotting a sprite where each pixel is effectively a mapping of how much to brighten/darken the pixel it’s plotted onto? E.g. assuming this sprite is 8bpp, if a pixel has value 0, it means “darken a lot”, 127 means “don’t change” and 255 means “brighten a lot”. I assume I’m going to just have to implement this with a custom plotting routine – or is this something that can be bodged using clever alpha sprite plotting? |
Steve Revill (20) 1361 posts |
Thinking about it, I could maybe have a “darken” pixel as black, with the alpha channel controlling how much to darken that pixel. A “lighten” pixel would be white (again with the alpha channel controlling how much). And pixels that don’t affect the destination would be fully transparent in the alpha channel (or vice versa for fully visible). I think I just answered my own question! :) I should probably just RTFM, but I assume I can have an 8bpp paletted sprite with alpha? |
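Steve’s encoding reduces to ordinary alpha blending with the sprite pixel pinned to black or white. A per-channel sketch in C, assuming the usual `(src*alpha + dst*(255-alpha))/255` blend with truncating division (the OS’s exact rounding may differ):

```c
/* Standard 8-bit source-over alpha blend (truncating division). */
static unsigned blend(unsigned dst, unsigned src, unsigned alpha)
{
    return (src * alpha + dst * (255u - alpha)) / 255u;
}

/* Steve's encoding: a "darken" pixel is black, a "lighten" pixel is
   white, and the alpha channel says how strongly to apply it. */
unsigned darken(unsigned dst, unsigned amount)  { return blend(dst, 0u, amount); }
unsigned lighten(unsigned dst, unsigned amount) { return blend(dst, 255u, amount); }
```

With this encoding, alpha 0 leaves the destination untouched and alpha 255 drives it fully to black or white.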
Jeffrey Lee (213) 6048 posts |
That would work, although the lightening operation would be a blend towards white, rather than applying a >1 multiplier to the RGB. I guess it depends whether that’s what you’re after or not! The other option I can think of would be to have two sprites (one to brighten, one to darken) and use EOR plotting to invert the screen before (and after) plotting the brightening sprite. If my maths is right, this would result in the brightening sprite effectively performing a blend towards the inverse of its own colours.
Yes, the RISC OS Select alpha mask format can be used for that. Each pixel will have its own alpha value, independent of the palette index. |
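Jeffrey’s invert/plot/invert idea can be checked numerically. The sketch below is my arithmetic, not code from the thread: it models one colour channel with a truncating alpha blend, and shows that the net effect is a blend towards the inverse of the sprite pixel’s value (up to a rounding difference of one).

```c
/* Truncating 8-bit alpha blend of src over dst. */
static unsigned blend(unsigned dst, unsigned src, unsigned alpha)
{
    return (src * alpha + dst * (255u - alpha)) / 255u;
}

/* Jeffrey's sequence per channel: invert the screen, alpha-blend the
   sprite pixel onto it, invert again.  In exact arithmetic this is
   dst + alpha*(255 - dst - src)/255, i.e. a blend of dst towards the
   INVERSE of the sprite colour (modulo truncation). */
unsigned eor_brighten(unsigned dst, unsigned src, unsigned alpha)
{
    unsigned inverted = 255u - dst;                  /* EOR &FF       */
    unsigned blended  = blend(inverted, src, alpha); /* normal plot   */
    return 255u - blended;                           /* EOR &FF again */
}
```

So a fully opaque black sprite pixel drives the screen pixel to white, which is what makes a black-on-alpha sprite act as a brightener under this scheme.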
Steve Revill (20) 1361 posts |
Interesting thought. I’d need to switch to banked screens but I’ve been getting the feeling that’d be needed anyway for what I’m mulling over. |
Jeffrey Lee (213) 6048 posts |
With the amount of reading & writing of screen memory needed for that approach, you’ll probably find you can get better performance by creating a temporary sprite in main memory each time you want to plot your brightening/darkening sprite. Which wouldn’t be as bad as it sounds, considering that you’ll be able to optimise the last EOR rectangle plot into an EOR sprite plot.
Depending on the other things you need to do, this may remove the need to use multiple screen banks. Or you could even use main RAM for your second screen bank and just use OS_SpriteOp to blit it to VRAM once you’re done.
Jeffrey Lee (213) 6048 posts |
Scratch that – I was thinking there’d be a “dest = NOT src” plot action, but there isn’t. But I think using an offscreen buffer will still end up being faster than reading & writing directly to screen memory, at least for the Iyonix. And if you wanted to, it wouldn’t be too hard to put together your own plotter to copy the inverted sprite back to the screen – it’ll just be a simple rectangle of pixels. |
nemo (145) 2546 posts |
You did; that is the way to do it.
That IS a lightening operation. What you’re describing is a (clipping) saturation process. EOR is a red herring; Steve’s approach is correct… except for gamma correction, of course (everyone mimes “broken record”).
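nemo’s distinction can be made concrete in a few lines of C: blending towards white is a bounded interpolation between the destination and 255, while a true brightening operation is a multiplier that must be clipped (saturated) at 255. The fixed-point `gain` parameter below is my own illustration, and gamma correction is deliberately ignored, as nemo notes:

```c
/* Blend towards white: the result always lies between dst and 255. */
unsigned lighten_blend(unsigned dst, unsigned alpha)
{
    return (255u * alpha + dst * (255u - alpha)) / 255u;
}

/* Saturating brighten: multiply by an 8.8 fixed-point gain and clip.
   gain = 256 means "no change", 512 means "double the brightness". */
unsigned brighten_saturate(unsigned dst, unsigned gain)
{
    unsigned v = (dst * gain) >> 8;
    return v > 255u ? 255u : v;   /* the clip is the saturation */
}
```

The two disagree most on dark pixels: blending half-way to white turns 10 into 132, whereas doubling only turns 10 into 20.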
A single alpha blend onto the screen should not require any more screen IO than using a separate buffer would, though a separate buffer then needs its own IO. Alpha blending isn’t done a pixel at a time, is it?! |
Jeffrey Lee (213) 6048 posts |
I was still talking in the context of doing a saturation process with the EOR plotting. Although I guess it’s possible a separate buffer would still be faster for the single sprite op case.
For complex plots screen reads/writes are performed one word at a time, which for 32bpp would obviously end up being one pixel at a time. I think this is partly due to limitations in how registers are allocated (it’s easier to guarantee the system will work if it just sticks to one register). Plus the code subscribes to the mantra of “screen memory is slow” and so it avoids redundant reads/writes wherever possible, so isn’t very easy to get working with LDM/STM. |
nemo (145) 2546 posts |
I have in the past gone to extreme lengths to ensure screen writes took place in aligned two-word (or four-word) bundles (and from instructions on four-word boundaries!), but modern core architectures don’t really benefit from that kind of ARM OCD ASM IYSWIM. |
Steve Revill (20) 1361 posts |
Try telling that to Ben… |
nemo (145) 2546 posts |
We all need a hobby. Mine is failing to learn Japanese. |