Discrepancy between Font_ConverttoOS and Font_CharBBox
WPB (1391) 352 posts
I used Font_ScanString to get the size of an “A” character – the result is in millipoints. The exact figures (by way of example) were 23460 × 21120. Converting those to OS units using Font_ConverttoOS gives 58 × 52.

Calling Font_CharBBox to get the size of the same “A” character also gives 23460 × 21120 in millipoints. However, when bit 4 of the flags is set (to return OS units instead of millipoints), Font_CharBBox gives 66 × 60.

At first I thought it was an inclusive/exclusive issue, but the same millipoint figures are being converted to different OS units. I suspect Font_CharBBox is getting it wrong, but I’m surprised it doesn’t just use the same code as Font_ConverttoOS internally. This is on Font Manager 3.61 (sorry, I haven’t tried a more recent version yet). The following snippet shows the problem:
SYS "Font_CharBBox", handle%, 65, 0 TO ,x0%, y0%, x1%, y1%       : REM bbox of "A" in millipoints
w_pnts% = x1% - x0% : h_pnts% = y1% - y0%                        : REM 23460 x 21120 here
SYS "Font_ConverttoOS",, w_pnts%, h_pnts% TO ,w_os1%, h_os1%     : REM gives 58 x 52
SYS "Font_CharBBox", handle%, 65, (1<<4) TO ,x0%, y0%, x1%, y1%  : REM bit 4 set: bbox in OS units, gives 66 x 60
WPB (1391) 352 posts
Well, I’ve been looking into this more, and I partially understand it, but I’m not sure what the correct fix (if any) should be…

What happens when you ask for the char bbox in OS units is not (as I mused above) that the same millipoint code gets executed and the result converted to OS units just before returning to the caller. The two methods of calling Font_CharBBox follow very different code paths. However, by examining debug output from the Font Manager, I can see that both paths end up with the same values for the unscaled outline bbox.

Font_CharBBox (pixels) calls getoutlinebbox to get the unscaled outline bbox, which goes on to call getbbox. After scaling the bbox, getbbox then adds some padding to it: 1 pixel all round for 1 bpp, 2 pixels all round for 4 bpp. (Presumably this bpp refers to the Font Manager’s internal bitmap representation of the font, and has nothing to do with the screen mode?)

Font_CharBBox (millipoints) calls getoutlines_metricsbbox to get the unscaled outline bbox (with the same result as the pixels version above), which is then converted to millipoints and scaled by transformbox before returning to the caller. In this code path, no padding is added to the bbox, which is where the discrepancy arises.

getbbox is also used by Font_Paint. (All the functions mentioned are in the Fonts01 source file.)

So the question is: is it even a bug that the two different ways of calling Font_CharBBox return different results? (Surely it is.) If it is, what should be done about it? Should the padding not be added in the pixels path? (That would surely mess up Font_Paint and others, so it would have to be post-corrected after the call into getbbox in this particular case, or a flag would have to be passed into getbbox to say whether or not to add the padding.) Or should the padding be added in the millipoints path?

Certainly from a user’s POV, finding a 4-pixel difference in the width and height of a character depending on how you call Font_CharBBox is disconcerting, to say the least. Can someone who understands the details give some input on this, please? Ben, Jeffrey, nemo, Sprow, anyone else? Thanks in advance.

EDIT: I should say that this is all with the latest Font Manager from CVS, and on RO 5.19 and RO 4.42.
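EDIT 2: Incidentally, the padding above reconciles the numbers in my first post exactly. A minimal sketch of the arithmetic in BASIC (assuming a 4-bpp cached bitmap and a mode with 2 OS units per pixel – both assumptions on my part, not values read out of the Font Manager):

REM Sketch: reconcile the two Font_CharBBox results, assuming 4 bpp
REM and a screen mode with 2 OS units per pixel (my assumptions)
pad_px% = 2                        : REM getbbox padding per side at 4 bpp
os_per_px% = 2                     : REM assumed OS units per pixel
extra% = 2 * pad_px% * os_per_px%  : REM padding on both sides = 8 OS units
PRINT 58 + extra%, 52 + extra%     : REM prints 66 60, matching the OS-unit call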
nemo (145) 2546 posts
No. (pixels) is for finding the size of the bitimaged version of a glyph. (millipoints) is for finding the geometric size. The former should be used for caching and screen manipulation and the latter for typographic calculations.
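To illustrate the split (a sketch, with a hypothetical handle% from Font_FindFont):

SYS "Font_CharBBox", handle%, 65, 0 TO ,x0%, y0%, x1%, y1%     : REM geometric (millipoints): typographic calculations
SYS "Font_CharBBox", handle%, 65, 1<<4 TO ,x0%, y0%, x1%, y1%  : REM bitimaged (OS units): caching and screen work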
WPB (1391) 352 posts
So you wouldn’t expect, when converting between the two with an OS call, to get the same result for the same character?
nemo (145) 2546 posts
No, I would not expect to get the same result. I would expect the (OS units) call to give you a slightly larger bounding box than the (millipoints) result converted to OS units (even rounding up). The two forms of the call are for different purposes.
WPB (1391) 352 posts
I’m not trying to be a pedant – I’m just trying to understand the reasoning here. If the bounding box is defined as “the smallest box necessary to cover the character”, surely that’s what the call should return in both cases. If you accept that statement, then surely the difference between the two results, when converted, should be at most one pixel? I appreciate what you’re saying about the two units having different uses, but the fact is the (OS units) call is not returning the bounding box of the character; it’s returning the bounding box of the character plus a bit of padding. Sorry to bang on…
nemo (145) 2546 posts
The behaviour you are not allowing for is hinting. Bitimaged glyphs are grid-fitted and hinted to a particular resolution. The vector form (millipoints) will not (should not) be performing that stage.
WPB (1391) 352 posts
Yes, I figured hinting would come into it. In truth, if that is the reason for the discrepancy, then the FM is taking a shortcut: it just adds a fixed 2 pixels all round for the bitmap version of the call, rather than actually calculating how many pixels the hinting requires.
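For reference, the padding step reads to me as equivalent to the following (a BASIC paraphrase of the ARM code in Fonts01, not the actual source; bpp% stands for the depth of the cached bitmap):

IF bpp% = 1 THEN pad% = 1 ELSE pad% = 2    : REM fixed padding per side, not derived from the hinting
x0% -= pad% : y0% -= pad%                  : REM grow the scaled bbox all round
x1% += pad% : y1% += pad%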