YUV image formats
Jeffrey Lee (213) 6048 posts |
So for quite a while we’ve had the extended framebuffer format specification which we’ve used as a basis for implementing new RGB image formats within the OS. The spec also covers YUV formats, but when I started work on implementing the GraphicsV overlay API I realised that the proposed way of doing things isn’t quite up to scratch.
However, the main issue I want to talk about is the use of “422” and “420” NColour values. For YUV 422, hardware typically supports a few different ways of packing the components into 32-bit words, but with the “422” scheme we can only describe one of them. So if we’re introducing “unusual” NColour values, I think a better way of dealing with YUV would be to allow FOURCCs to be used, as that should allow us to describe almost any format that we’re likely to find hardware support for, and will simplify integration with video codecs. However, there is the downside that the specifications of the formats aren’t always that clear, and you get some odd things like YUYV being a duplicate of YUY2. Thoughts? |
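For reference, here is a minimal sketch of two of those packings: the standard FOURCC definitions of YUY2/YUYV and UYVY, both storing one pixel pair per 32-bit word. Nothing in it is specific to the RISC OS proposal.

    #include <stdint.h>

    /* One 32-bit word holds two horizontally adjacent pixels: each has its
       own Y sample, but both share one U (Cb) and one V (Cr) sample. */

    typedef struct { uint8_t y0, u, y1, v; } yuy2_pair;   /* "YUY2"/"YUYV": Y0 U Y1 V */
    typedef struct { uint8_t u, y0, v, y1; } uyvy_pair;   /* "UYVY":        U Y0 V Y1 */

    /* Same components, same subsampling; only the byte order differs, which is
       exactly the distinction a single "422" NColour value cannot express. */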
nemo (145) 2552 posts |
Trying to squeeze a never-ending series of formats into a highly limited number of bits originally intended to communicate something much simpler is doomed to eventual failure. I’d rather have an ‘other’ value and some other, richer interface (i.e. additional mode variables) to call instead.

As for FourCCs, you are inviting instant obsolescence (much like the Unicode problem). Any software that needs to know is immediately restricted by the definitions it is aware of, and once “New!” turns up it can do nothing with it unless there is some (magically up-to-date) OS interface for turning the label into actual metrics… at which point the label could have been anything. I think more mode variables are required.

And before fixing anything regarding colour spaces, please keep in mind the implications of ICC profiles. Having implemented both v2 & v4 ICC output and devicelink profile handling, I really have to stress VERY STRONGLY that “RGB” and “CMYK” are merely classes of colourspace, but don’t tell you very much beyond the dimensionality of the input. Indeed, as far as I know, my Monitor module provides the only API by which the gamma response of the screen can be discovered. That should also be a mode variable. |
Jeffrey Lee (213) 6048 posts |
So if you think a more descriptive API should be used, the question I have for you is: What information do you think the API should return? |
Jeffrey Lee (213) 6048 posts |
After looking at the capabilities of the different machines, it does seem like FOURCCs aren’t going to be flexible enough to describe everything that’s possible. And since the hardware doesn’t accept a FOURCC directly, we’re still going to need some code which is capable of converting from a FOURCC to a more detailed description, so allowing that detailed description to be used throughout most layers of the API would make things a lot easier. My current thoughts are that the detailed info would operate similar to the following:
This means that a descriptor for a planar YUV 4:2:0 format could look like the following:

    Macroblock size: 2x2
    Plane 0:
      Alignment = 8
      Element size = 32
      Component 0:  Type: Y  Size: 8  Offset: 0   Rectangle: (0,0)-(0,0)
      Component 1:  Type: Y  Size: 8  Offset: 8   Rectangle: (1,0)-(1,0)
      Component 2:  Type: Y  Size: 8  Offset: 16  Rectangle: (0,1)-(0,1)
      Component 3:  Type: Y  Size: 8  Offset: 24  Rectangle: (1,1)-(1,1)
    Plane 1:
      Alignment = 8
      Element size = 8
      Component 0:  Type: U  Size: 8  Offset: 0   Rectangle: (0,0)-(1,1)
    Plane 2:
      Alignment = 8
      Element size = 8
      Component 0:  Type: V  Size: 8  Offset: 0   Rectangle: (0,0)-(1,1)

And when you allocate / map in / lock an image buffer, you would be returned three base plane addresses and three row stride values, giving you everything you need to be able to compute the addresses of individual elements within the image.

The descriptor format could probably be extended to support describing the method of chroma subsampling, e.g. by specifying a centre point for each rectangle. Or maybe doing away with rectangles altogether and just specifying the centre point. But that might be a bit dangerous if there’s a format where you want some samples to be blended and others to be exact – so some extra thought is probably needed on how that should be handled.

There’s also the question of how to handle unused bits within the elements. Extra component entries would probably be needed to describe the expected usage, e.g. a ‘fixed value’ component type (if the unused bits need some fixed value), a ‘don’t care’ component type, and a ‘don’t touch’ component type.

Any thoughts? |
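As a concrete illustration of the last point about plane addresses and strides, here is a minimal sketch in C of addressing individual samples in such a planar YUV 4:2:0 buffer. The structure and function names are hypothetical, not part of any proposed API.

    #include <stdint.h>

    /* Hypothetical result of mapping/locking a planar YUV 4:2:0 buffer:
       one base address and one row stride (in bytes) per plane. */
    typedef struct {
        uint8_t *base[3];    /* Y, U, V plane base addresses */
        int      stride[3];  /* Y, U, V row strides in bytes */
    } yuv420_mapping;

    /* Addresses of the samples covering pixel (x, y). The Y plane is full
       resolution; the U and V planes are subsampled 2x2, matching the
       (0,0)-(1,1) rectangles in the descriptor above. */
    static uint8_t *y_sample(const yuv420_mapping *m, int x, int y)
    {
        return m->base[0] + y * m->stride[0] + x;
    }

    static uint8_t *u_sample(const yuv420_mapping *m, int x, int y)
    {
        return m->base[1] + (y / 2) * m->stride[1] + (x / 2);
    }

    static uint8_t *v_sample(const yuv420_mapping *m, int x, int y)
    {
        return m->base[2] + (y / 2) * m->stride[2] + (x / 2);
    }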
Steve Pampling (1551) 8172 posts |
Which I think was the general message from Nemo back in February.
I think asking Nemo nicely for a few suggestions might bring item 7041¹ on his todo list to the fore. It’s stuff that’s right up his alley.

¹ May have lost count. |
Colin (478) 2433 posts |
I’m a total novice at video so I hope you don’t mind me asking a few questions here. I have a USB web cam that I dust off every so often and wonder how I’m going to get video off it. It outputs various frame sizes in YUY2 format – whatever that means.

In your specification, in the table of sprite formats, does ‘new’ in the introduced column mean that they are already available or that they will be available? Maybe this topic is about introducing these new formats – it’s not clear to me. Is either sprite type 17 or sprite type 18 YUY2?

The only way I can think of to display real-time video in a window is to capture frames directly to a pair of sprites in ping-pong fashion and display them in an application – this runs the risk of missing a frame. Any thoughts on a better way? |
Rick Murray (539) 13850 posts |
From a brief look, it seems as if YUY2 is YUV 4:2:2, which is YCbCr 4:2:2 – sort of like how JPEG does it, but since it is video it is a packed YCbCr format in which a pair of consecutive pixels has one Y sample each but shares a Cb and a Cr sample. In other words, the brightness/luminance is at a higher resolution than the colour. It’s the same for analogue TV, partly because colour was squeezed in after the fact and partly because we’re a lot more sensitive to variations in brightness than in colour.
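To make the shared-chroma idea concrete, here is a minimal sketch in C of unpacking one YUY2 pixel pair and converting it to RGB, assuming full-range BT.601 coefficients. Real hardware and codecs may use limited-range or BT.709 variants, so the constants are illustrative only.

    #include <stdint.h>

    static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

    /* Convert one Y sample plus the shared Cb/Cr pair to RGB
       (full-range BT.601, 16.16 fixed-point approximation). */
    static void ycbcr_to_rgb(int y, int cb, int cr, uint8_t rgb[3])
    {
        cb -= 128; cr -= 128;
        rgb[0] = clamp8(y + (91881 * cr) / 65536);               /* R = Y + 1.402 Cr            */
        rgb[1] = clamp8(y - (22554 * cb + 46802 * cr) / 65536);  /* G = Y - 0.344 Cb - 0.714 Cr */
        rgb[2] = clamp8(y + (116130 * cb) / 65536);              /* B = Y + 1.772 Cb            */
    }

    /* A YUY2 pixel pair is 4 bytes: Y0 U Y1 V. Both pixels reuse U and V. */
    static void yuy2_pair_to_rgb(const uint8_t p[4], uint8_t rgb0[3], uint8_t rgb1[3])
    {
        ycbcr_to_rgb(p[0], p[1], p[3], rgb0);
        ycbcr_to_rgb(p[2], p[1], p[3], rgb1);
    }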
That’s the best you’re going to get with RISC OS. My old PVR does it by having a decoder chip that outputs the video data in a format that the SoC understands; the data is dumped directly into video memory used by one of the low-level video planes. The UI is actually an overlay that sits on top of this. Video playback on a PC often uses a similar method, writing directly to a video buffer. That’s because involving the OS and trying to do stuff in software is just too slow. If you decode to a sprite and then plot the sprite… |
Jeffrey Lee (213) 6048 posts |
One extra thing which the descriptor will have to describe is the numeric encoding method(s) that are in use – e.g. whether the values are encoded as fixed-point (un)signed ints, normalised (un)signed ints, IEEE floats, etc. This may mean a bunch of extra attributes need to be added (or a fully extensible way of describing the encoding), since things can get a bit complicated (specifically once you start getting into the formats that are supported by 3D GPUs).
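Purely as an illustration of the kind of attribute that implies, here is a hypothetical enumeration of sample encodings; none of these names exist in any current RISC OS API.

    /* Hypothetical 'sample encoding' attribute values for a component. */
    typedef enum {
        ENC_UNSIGNED_INT  = 0,  /* plain unsigned integer                             */
        ENC_SIGNED_INT    = 1,  /* two's complement signed integer                    */
        ENC_UNSIGNED_NORM = 2,  /* unsigned, normalised to the range 0..1             */
        ENC_SIGNED_NORM   = 3,  /* signed, normalised to the range -1..1              */
        ENC_FIXED_POINT   = 4,  /* fixed point; needs an extra 'fraction bits' field  */
        ENC_IEEE_FLOAT    = 5   /* IEEE 754 float (e.g. 16-bit or 32-bit)             */
    } sample_encoding;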
“New” means “new format being introduced by this specification”. Support for the new RGB sprite/framebuffer formats has been introduced (see the “current status” section of the spec). YUV has yet to be done.
In a roundabout way, yes. My main focus at the moment is the ability to describe YUV formats at the GraphicsV layer, so that hardware overlays (like Rick describes) can be created in those formats and used for video playback and the like. I’ve already got a version of KinoAmp which is able to make use of YUV overlays on OMAP3/4 (via the unreleased VideoOverlay module + updated OMAPVideo), but until we can settle on how the formats should be described there’s not much further I’ll be able to take it.
No. Sprite type 17 is close, but the spec suggests it uses a different component ordering. Although it’s trivial to convert between the two, it’s obviously going to be inefficient if there’s direct hardware support for the desired format (and it looks like there typically is). So the origin of this thread was the question “How can we describe different YUV format component orderings at the GraphicsV/mode variable layer?” Once we’ve answered that question we can take a look at what should be done for sprites, since one of my goals for the VideoOverlay module is for it to be able to fall back to software rendering (preferably using sprites) if there aren’t any suitable hardware overlays available. |
Colin Ferris (399) 1818 posts |
It would be interesting if USB scopes could be used with RO. |
Rick Murray (539) 13850 posts |
Stupid forum logged me out after only five minutes (while I was writing a reply). The shorter version: I have spent four days fiddling with something but since it is an undocumented Apple-origin protocol and only one person has partially decoded the file format, my own encoder simply isn’t working (with no clue as to why). Everything that does work uses WiFi so there’s just no place to stuff in WireShark to get a packet capture. Four days is enough. I’m going to drop this now. Got other stuff to do. Why am I saying this? Because I think the main impediment to whether or not stuff can work with RISC OS is simply lack of documentation. I wonder how many USB scopes support an open and defined protocol and how many are weird vendor-only things? I wish you luck… |
nemo (145) 2552 posts |
Steve very kindly said
Jeffrey’s suggestions are clearly video influenced, which is very much not my alley. Still images are something I have some experience in: JPEG, GIF, PNG, PSD, TIFF, JP2, WMPhoto (JPEG XR), DjVu, oh, and Sprites, so I can only really talk about those. I’m going to ramble for a bit… well, it is the weekend.

Although it isn’t fashionable to think about printing, I’m also aware that print rendering requires output formats that one would never need for display – because they’re subtractive for a start. The problem of outputting one’s proprietary video format via an output device of unknown capability is equivalent to the problem of outputting one’s proprietary image format via a printer of unknown colourants.

Aside: I have a sprite format I use which is pretty flexible for subtractive spaces (it was originally type 15 but I migrated it to 14 when RO5 redefined things. No, I have not allocated it because I’ve not released anything, yet). It supports up to 255 colourants, 255 alpha channels, additive or subtractive colour spaces and optional tiling. The tiling is 64×64 and came about because I discovered that tiled affine transformations were slightly more efficient, due to cache coherency. It therefore requires up to 255 palettes, and I also support various palette types on 32K, 16M and CMYK sprites. I mention this only to generalise what “a sprite” may be.

The general problem of “how do I output these samples in this colourspace into that colourspace” is usually delegated to a CMM – a Colour Management Module. The nearest RISC OS has is ColourTrans, which is rather parochial. (It is possible to map proper colour management such as ICC devicelink profiles to a ColourTrans CLUT, at the expense of quality.) If your output device doesn’t happen to have exactly the same (or mappable) format as your input format, then colour management is required. Converting YUV to RGB on this device in this ordering with these primaries and that gamma is not something an application should have to take responsibility for. The ends – how you turn this image data into CMM input samples, and then the output samples into that framebuffer data – are traditionally two separate responsibilities.

Another pertinent but little-understood detail of CMMs is that actual devicelink profiles are rare… that is to say, it is unusual to have a profile that tells you how to convert colours in this image into output values for that device. Normally one would have two separate profiles, an input and an output, which are connected through a profile connection space – a third, arbitrary colourspace. This has a rather unfortunate effect when one is trying to reproduce this CMYK image on that CMYK device – the profile connection space is XYZ or Lab, which is three-dimensional. So your four-dimensional input is crushed to three dimensions, and then magicked back to four dimensions again. Actual CMMs go to some lengths to mitigate the resulting loss of information. This is usually referred to as black preservation.

Too much information. Jeffrey said
I’m sure that must be true of the video formats Jeffrey is discussing, but it’s not true of images in general, or even sprites. Whether that is important depends on whether “most layers of the API” are actually restricted to HAL descriptions of video raster format, or a proposed system for describing images in general, or just output targets, or just video hardware targets.
I think the pragmatic approach for the application is that it either has specific (optimised) support for the format(s) available, or it uses 32-bit RGBA. BGR/RGB is a useful and reasonable optimisation that can be baked into some algorithms without cost… but on devices with a GPU any manipulation is far better happening in the GPU – conversion from YUV to RGB ought to be happening in a fragment shader, not in ARM code.

This is why I am uneasy about polluting “sprite space” with numerous YUV formats. YUV is a form of cheap compression for video. It is not used for still images, for which there are many far superior algorithms. Sprites (the JPEG-in-sprite debacle aside) are not compressed. I’m not convinced that the laudable desire to support a YUV stream is any better suited by the Sprite format than the JFIF stream was. I realise of course that they are of vastly different scales of complexity and compression, but Sprites are an editable format and one must ask whether anybody would ever choose to edit a YUV Sprite without converting it to RGB first? I mean, sensible people don’t even edit CMYK.

I have no strong opinion at this stage, merely misgivings. Thanks for asking. |
Steve Pampling (1551) 8172 posts |
The weird thing is that I’ve noted occasions where I forgot to log out and put the laptop in hibernate, came back the following morning and I was still usably logged in. |
Jeffrey Lee (213) 6048 posts |
An extra thing on the wishlist: The ability to describe/support alpha channels that use premultiplied alpha.
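For anyone unfamiliar with the distinction, here is a minimal sketch of the ‘over’ composite with straight versus premultiplied alpha, using 8-bit channels. Illustrative only.

    #include <stdint.h>

    /* Straight (non-premultiplied) alpha: the colour channels hold the
       full-intensity colour, and alpha is applied at composite time. */
    static uint8_t over_straight(uint8_t src, uint8_t dst, uint8_t a)
    {
        return (uint8_t)((src * a + dst * (255 - a)) / 255);
    }

    /* Premultiplied alpha: the colour channels are already scaled by alpha,
       so the composite needs one fewer multiply per channel. */
    static uint8_t over_premultiplied(uint8_t src_premul, uint8_t dst, uint8_t a)
    {
        return (uint8_t)(src_premul + dst * (255 - a) / 255);
    }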
I view sprites as being mini-framebuffers. They support all the same pixel formats as the screen supports (and more), and the way the pixels within a sprite are addressed is practically identical to how they’re addressed in video memory. These properties are very useful when you take into account the ability to redirect screen output to a sprite; a routine which writes to the screen requires little or no modification to also support writing to a sprite. And if you’re using a sprite as an off-screen buffer, you can create your off-screen buffer in the same format as the screen and then quickly blit it across. The ability to accurately capture the screen contents is also useful.

So for me, it’s important that sprites continue to support all the same formats that the screen supports (and the ability to redirect screen/VDU output to an overlay is one of the things on my todo list. OK, I’m not currently planning to extend the VDU to support rendering to macroblock-based or multi-plane buffers, but I’d hope I can at least get YUV 4:4:4 and CMYK working).

For me, the big problem with sprites is the fixed data structure. Mode selector blocks allow you to specify a practically endless number of properties for the screen mode, but sprites require everything to be specified in a handful of special-purpose words in the sprite header. They’re also restricted to just an image plane and a mask plane, and despite having the useful property of allowing multiple images to be stored in one file, the 12-byte name limit can easily cause problems.

So when I come to have a proper look at how to support YUV formats in sprites, I’ll probably be approaching it from the angle of trying to find a way to extend the format of the sprite header, or maybe to add support for a different, more flexible image container format to the OS. |
Rick Murray (539) 13850 posts |
For anybody who isn’t following the idea of planes and such, here is a useful diagram from the datasheet for the DM320 chip inside my PVR:
The two video planes at the back accept YCbCr 4:2:2 video data directly. Bitmap window 0 can be 8-bit paletted colour (256 colours) or RGB16. Bitmap window 1 can be the same, or attributes (transparency levels). Only one can be RGB16 at a time, and not if it overlaps the video plane. The palette modes can actually be 1, 2, 4, or 8 bits per pixel. Thus – one of the video planes will accept a live video feed, bitmap 0 will be in paletted colour to provide the UI, and bitmap 1 will mark which parts are transparent so the video shows through behind the UI. |
nemo (145) 2552 posts |
Jeffrey said
Indeed, and I had assumed that the GPU supported YUV video by having a YUV overlay, rather than a YUV-only framebuffer. Apologies if the devices you have studied are different. Is it really the case that to allow YUV video to work in a window, the entire desktop must be rendered in YUV? I’m surprised by that. If not, and there is a YUV overlay, then I remain unconvinced that the YUV stream needs to be Sprite format. After all, many GPUs support JPEG rendering, but that isn’t an argument for returning to JPEG-in-sprite.
Hear, hear for CMYK. I did some extensions to PDriverPS a long time ago to provide CMYK support for sprites and the ColourTrans calls (in a particular paradigm – there isn’t a SetCMYKColour call per se). CMYK support really does need palette support, otherwise you get the ridiculous ‘video CMYK’ that bedevils ColourTrans et al. 100% Cyan is not 100% Green and Blue, but I’m back on to CMMs again.
:-) I’ve already mentioned a sprite format that supports up to 510 planes. I didn’t mention that I’ve done a lot of work on long sprite names – !LongFiles style. I have a pretty robust scheme now, except that we have the same problem as extending filetype length – so much software has fixed-size buffers, so there must be a different API for the long name, and support for a short name transparently. The scheme I have hashes long sprite names into a 32-bit ‘unique’ number, which is then appended to the first 7 characters of the long name as 5 encoded chars, resulting in a legal and ‘unique’ short name that won’t upset unaware apps. Obviously short names stay unaffected. The other feature is automatic truncation on failure, so if something asks for

I have tried a number of strategies for storing the extended data. The obvious is a header extension block for which, though there’s never been an official definition, there is a convention:

    Offset  Size  Contents
    0       4     Size of sprite area
    4       4     Number of sprites
    8       4     Offset to 1st sprite – if > 20, there are extension blocks
    12      4     Offset to free space
    16      4     Identifier (I) for 1st extension block
    20      4     Length (L) of 1st extension block
    24      L-8   Optional data of 1st extension block
    16+L    4     Identifier (I2) for 2nd extension block
    20+L    4     Length (L2) of 2nd extension block
    ...

The first length
Identifiers

This convention has been used in the wild by a few authors (and also unallocated FourCC-style identifiers as well, what can you do?). Using an extension block for long sprite names is slightly problematic in that renaming a sprite could cause that block to change size, which then moves all the sprites. Renaming has never moved sprites before… but it would only happen with long sprite names.

However, the biggest problem is that sprite names have always auto-truncated. Suddenly allowing long names may well cause existing software to misbehave, as renaming a sprite from ‘short’ to ‘notaslongasyoudthink’ would no longer have the same effect. Consequently new ‘long rename’ and ‘long create’ SpriteOps would be needed. It is legal to have a sprite name in the last 12 bytes of application space, for example. :-/

I also tried a version that moved the longname block on sprite load, turning it into an invisible/unselectable sprite (which is easy enough), which then turns back into an extension when saved… except there’s plenty of naughty code that uses OS_File instead of OS_SpriteOp to load and save sprites.

I’ve also tried putting the long sprite name in the sprite itself – the safest place is between the image and the mask (it can’t go after the mask, and putting it before the image just causes unpleasant palette handling difficulties). BUT finding it then requires one to be able to calculate the actual size of the image which, if it’s in an unsupported mode, is not actually possible (one can guess… but not with certainty). Yes, I tried an encapsulated form that can be discovered by working backwards from the mask or end, in a similar though reversed form of the extension blocks… but grokking (and finding) long names during sprite ops is not efficient, hence the hashed approach.

It’s not straightforward.
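A minimal sketch of that convention as C structures, assuming the layout in the table above. The type and field names are invented for illustration, and as noted the convention has never been officially defined.

    #include <stdint.h>

    /* Sprite area header as held in memory. In a sprite *file* the first word
       (the size) is omitted, and 'first' being greater than 20 indicates that
       extension blocks follow the header. */
    typedef struct {
        uint32_t size;    /* size of sprite area               */
        uint32_t count;   /* number of sprites                 */
        uint32_t first;   /* offset to 1st sprite              */
        uint32_t free;    /* offset to free space              */
        /* extension blocks follow here when first > 20        */
    } sprite_area_header;

    /* One extension block: an identifier word, a length word covering the whole
       block (so the payload is length - 8 bytes), then the payload. Blocks are
       simply concatenated until the offset to the 1st sprite is reached. */
    typedef struct {
        uint32_t id;      /* identifier for this block         */
        uint32_t length;  /* total block length in bytes       */
        /* uint8_t data[length - 8]; */
    } sprite_area_extension;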
If you’d suggested having a dedicated YUV format, just as we have a dedicated JPEG format, you’d have my enthusiastic and total support. ;-) |
Jeffrey Lee (213) 6048 posts |
No. RGB(A) desktop + YUV overlay. (Although most hardware would let you have a YUV desktop if you wanted.)

YUV sprites would primarily be a convenience for the software fallback mode of VideoOverlay. It would create a YUV sprite, tell the video player the buffer address(es) (using the same API as is used for hardware overlays), then render the sprite to screen using OS_SpriteOp (i.e. SpriteExtend). Not having YUV sprite support would probably mean requiring VideoOverlay to contain its own YUV → RGB code, or some extra interface to SpriteExtend to allow rendering of arbitrary buffers. |
Rick Murray (539) 13850 posts |
Why not have the lookup table of names follow the sprite data (all of them), with a word inserted at the beginning which points into this table? Then it can juggle itself around as required while leaving the sprites as they are until they themselves change (deletion, inserting/removing rows/columns, etc)? |
Colin (478) 2433 posts |
Would an overlay work like a sprite? By that I mean a sprite can be placed on a window’s background and clipped/hidden. Would an overlay work like that, or would it have to be a rectangle above the desktop? |
nemo (145) 2552 posts |
There is a distinction between what happens in memory and what the format is on disk – the only way to have data at the end of the sprite file on disk is to disguise it as an unselectable sprite. Crucially, the first word of a sprite area in memory – the length – is not included in the sprite file. The length is implicit. So the “end” of the sprite file is just another sprite.

The problem with this approach is that sprite editors think they understand the sprite file format, and are happy to move sprites around (just as RO4 Paint does). Then the magic sprite with the data in it can be moved, or deleted. Pragmatically the data must be transparent to avoid this kind of difficulty. One can mitigate that by a hybrid approach – allocating an extension block with enough spare room to allow some renaming or new sprites to be created, and then creating an invisible sprite to contain the overflow when necessary, which is amalgamated on save… except, as I mentioned, some programs use OS_File,10 instead of OS_SpriteOp,12.

My hope is that restricting long sprite name support to a new set of APIs means that use of the new API requires acknowledgement that sprites can move if renamed. Renaming is extremely rare of course. However, merging of sprites is more common, and that would also cause movement due to the coalescing of disparate extensions (and that should always have been the case, but is beyond the ken of SpriteOp,11). As it happens, the Wimp already discards its caching when merging new sprites, but you can see the implications. |
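A minimal sketch of that in-memory versus on-disk point: loading a sprite file by hand means supplying the first (size) word yourself, since the file begins at the ‘number of sprites’ word. Illustrative C only; in practice the SpriteOp load call does this for you.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Load a sprite file into a malloc'd sprite area. The file omits the area's
       first word (the total size), so the data is read in at offset 4 and the
       size word is filled in afterwards. */
    static void *load_sprite_file(const char *path)
    {
        FILE *f = fopen(path, "rb");
        long len;
        uint8_t *area;

        if (!f) return NULL;
        fseek(f, 0, SEEK_END);
        len = ftell(f);
        fseek(f, 0, SEEK_SET);

        area = malloc(len + 4);                           /* room for the size word */
        if (area && fread(area + 4, 1, len, f) == (size_t)len)
            *(uint32_t *)area = (uint32_t)(len + 4);      /* the implicit length    */
        else {
            free(area);
            area = NULL;
        }
        fclose(f);
        return area;
    }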
nemo (145) 2552 posts |
Jeffrey confirmed that he was talking about a YUV overlay, not a YUV framebuffer, thank heavens.
In the same way that JPEG-in-sprite would be a convenience for software JPEG rendering?
No, and only possibly. The fact that JPEG decoding is in SpriteExtend is mostly coincidental – there’s no argument for putting PNG, GIF or Wavelet decoding in there too, in the hope of reusing a small amount of blitting code that could otherwise be accessed via SpriteOp. So I don’t see that YUV is any different at all; it’s just another compressed format, although a somewhat simpler one than those. In fact, I’d argue that PackBits is an even simpler compression format than YUV, but no one is suggesting putting baseline TIFF support into SpriteExtend. I don’t see why a video stream should require an alien sprite type as an intermediate step when no other image format does. I continue to cite JPEG (and by extension MJPEG) as the example. If you wished to support MJPEG video, and the GPU hardware has JPEG decoding available, would you be arguing that Sprites must support JPEG again? I doubt it. |
nemo (145) 2552 posts |
Colin asked
An overlay works the same way as the pointer did in the original Archimedes. The window with the video “in” it doesn’t need any pixels drawn into it. The YUV overlay sits over it and is displayed instead. Some systems may use the GPU to render the YUV texture into the RGB framebuffer so that screen saves and reprocessing include the video, instead of a blank rectangle. Windows has done that since Longhorn (Vista/Win7) to enable the ‘Aero’ GUI to have blurred transparent windows sitting ‘on top’ of GPU-rendered elements… but it’s drawing each window into its own surface, so that’s a lot easier. |
Colin (478) 2433 posts |
I understand that an overlay is, well, an overlay – I was just trying to get a feel for how it would be used. It would be nice to be able to clip it to the redraw rectangles so it ‘appeared’ as if it was in the window. Pointers can be masked, so if an overlay can be masked then presumably it can be clipped. If all I can expect is a floating rectangle above the desktop like the mouse pointer, it will be a bit disappointing.
I gather Jeffrey is not proposing doing this, at least at the moment? |
Jeffrey Lee (213) 6048 posts |
Really what I’m after is a single image rendering API which can be used with both RGB & YUV images. Under RISC OS, sprites are the standard way of storing images in a framebuffer-like manner, so I was thinking that extending sprites would be the logical choice. But if sprites aren’t a good fit, then maybe the focus should be on implementing a new API which can take over from OS_SpriteOp? (ImageFileRenderer?)
In the API I’m classifying overlays into two types: “basic” overlays (which appear over the top of the desktop, but underneath the pointer) and “Z-order” overlays (which can have their depth controlled, allowing them to be placed at any depth in relation to the desktop & pointer).

Z-order overlays are nice and easy to work with because you can place them underneath the desktop and then either use the desktop’s alpha channel, or a special transparent colour, to mark certain areas of the desktop as transparent so that you can see the overlay that’s been placed underneath it. Basic overlays are annoying because they’ll always be on top. With enough effort I think the behaviour could be faked so that from the user’s perspective basic overlays look and act the same as a Z-order overlay, but I’ve yet to start work on that.

In all cases it’ll be VideoOverlay which is responsible for preparing the area of the screen which sits above/below the overlay, so apart from a few restrictions on what applications can/can’t do, they shouldn’t have to worry too much about whether it’s a Z-order, basic, or software-emulated overlay which is in use. Clipping to a rectangle, scaling, rotation (in 90 degree increments) and mirroring are supported by the API.

“Some systems may use the GPU to render the YUV texture into the RGB framebuffer”

Correct. |
nemo (145) 2552 posts |
Colin said
Yes, that’s how it works.

“Some systems may use the GPU to render the YUV texture into the RGB framebuffer”

Keep it on the roadmap though! It would be a shame to require software rendering even when the hardware is capable just because the target is a sprite. Low priority though.
I think that’s a much more flexible approach, yes, as it continues to apply to things that are less and less like sprites. There may be room for improvement within that API – I wonder how well a software renderer and the hardware driver can negotiate over scaling, for example (i.e. whether it is better for the software to scale while rendering, or leave scaling to hardware), but that’s finessing. I have not used that API in anger though (it was RO4-specific), so I can’t vouch for it at all… and if any of it seems suboptimal for your video use, it would probably be better to define another API to which ImageFileRender can be interfaced, rather than the other way around – IFR is not performance critical in the way that video rendering must be. |
Jeffrey Lee (213) 6048 posts |
One problem with using flexible descriptors is the problem of comparing two descriptors to see if they’re equal. There are many ways the entries within the descriptor can be ordered, and for some hypothetical image formats there are many ways in which it can be broken down into a list of entries (e.g. consider a format where one chrominance sample corresponds to an “L” shape of pixels – there are at least two ways that could be broken down into a rectangle list). To deal with this, I’m thinking that the per-component rectangle should be replaced with a 2D bitmap indicating which pixels that component affects (no more problems with working out how to break stuff down into rectangles), and that we should introduce the notion of there being a canonical form for a descriptor (so that two descriptors in their canonical form can easily be tested for equality using a simple memcmp). The definition for a canonical descriptor would be that:
Use of a bitmap within the component definition will make it a bit harder for software to parse the descriptors, but if the API sticks to using canonical descriptors then that could be offset by the fact that any software which only supports a limited set of formats can just resort to using memcmp to check if the given descriptor is a match for any of its known descriptors.

I’m still not sure what to do about describing unusual number formats and other properties. Perhaps another rule for the descriptors should be that any zero bytes/words at the end of each component should be trimmed from the canonical form (and there should be a header field indicating the byte size of the entries), so that we can extend the format to add more attributes in the future (with the default value of all attributes being zero). Or maybe a list of key-value pairs should be used instead. |
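A minimal sketch of the memcmp idea: if every descriptor is reduced to the same canonical byte layout, supporting a fixed set of formats becomes a table of byte-wise comparisons. The structure below is hypothetical and illustrates the principle only, not a proposed layout.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical fixed-size canonical descriptor blob. In reality the size
       would come from a header field; a fixed size keeps the sketch short. */
    typedef struct {
        uint8_t bytes[64];
    } canonical_desc;

    /* The formats this (imaginary) renderer knows how to handle,
       populated at start-up with their canonical descriptors. */
    static canonical_desc known_formats[4];

    /* Because both sides are in canonical form, equality is a plain memcmp. */
    static int find_format(const canonical_desc *d)
    {
        for (size_t i = 0; i < sizeof known_formats / sizeof known_formats[0]; i++)
            if (memcmp(d, &known_formats[i], sizeof *d) == 0)
                return (int)i;                   /* index of the matching format */
        return -1;                               /* not a supported format       */
    }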