TTF Fonts
Clive Semmens (2335) 3276 posts |
8~) It did, however, enable us to produce perfectly good-looking text in every Latin-alphabet language, as far as I know, except Vietnamese, with fonts containing only (iirc) 224 glyphs. Also, with different fonts of course, the Greek, Cyrillic and Devanagari alphabets. Never had occasion to need any others. From my point of view the worst thing about it was the need for those bloody kerns in the DTP files, rather than it all happening automagically in the rendering software. Also, of course, that the fonts contained all sorts of weird things in completely nonstandard locations… but what can you do when you’re not allowed more than 224 (or whatever it was) glyphs in your fonts? |
nemo (145) 2529 posts |
Indeed, it is precisely for all those reasons that we now have OTF fonts. Though it does lead to t̢̧̠̫̀̄̈̊h̘̝̺̻̎̐̔͐í̖̘̟̮̂̇̏n̠̥̫̺̄̊̋̐g̞̮̼͇̃̍̓̚s̗̝͓͙̄̅̈̑ ļ̝̮̯̐̑̿̕i̺̼͉͖͐͑͒͗k̖̗̘̙̂̃̉̌ȇ̮̮̮̮̑̑̑ t͇͇͇͇̿̿̿̿h̡̢̨̧̋̍̒̚ị̤͈͍̇̈̋̎s̶͜͟͢͝͞͠͡ |
Rick Murray (539) 13806 posts |
What’s that? A translation in Ar Ciela Hymmnos? ;-) |
Clive Semmens (2335) 3276 posts |
It worked… 8~) Happily I don’t need to do any of that stuff any more, and what I do want to do seems to work okay in LibreOffice with the fonts I have, and in HTML5. Not sure how much of it works in RISC OS on the Pi, but I don’t do anything in foreign languages on the Pi at the moment. I probably would if I could have IKHG back, and I’d make time to get to grips with the actual workings of things again – typing in foreign languages on the Mac is horrible. |
Clive Semmens (2335) 3276 posts |
Incidentally, that “things like this” is horrible and, I presume, is the result of combining accents without tables of how the damned things ought to be positioned. With 224-entry font tables I couldn’t have Unicode, but at least I could position diacritics appropriately in those combinations that actually occur in real languages*, albeit only within one family of applications (Impression), not across the OS.
* Apart from Vietnamese… and that “things like this” approach can’t do Vietnamese properly either, pretty obviously. Unless it has an even greater plethora of supernumerary pre-positioned diacritics than are visible in the example… |
Chris Mahoney (1684) 2165 posts |
“Things like this” flickers on my iPhone. Horrible? Yes :) |
nemo (145) 2529 posts |
Assuming you’re not referring to Chữ Nôm characters but rather Vietnamese Latin script, what do you mean? I can’t imagine a scenario that cannot be handled in OTF. Note that ‘isn’t handled in common fonts’ is not the same as ‘can’t be handled if you wanted to’. |
Clive Semmens (2335) 3276 posts |
I was referring to Vietnamese Latin script. I’ve no idea whether OTF can handle it (nicely, that is) and I quite believe you, but there’s no evidence of such a capability from your sample. But then, why would there be?

For the uninitiated, if any such are interested: Vietnamese Latin script uses multiple diacritics (accents etc.) on some letters, and rather than making great ugly heaps of them as in Nemo’s sample above, they’re made a bit smaller and carefully placed in pretty patterns so you can actually see them all perfectly distinctly. Unicode handles this perfectly well by the simple, if cumbersome, method of having a code point for every legal combination. How well or badly actual fonts implement these, if at all, is up to the font. They might use combinations of glyphs from elsewhere in the font (combining accents, typically), or they might have the entire combination defined locally.

My system had a table of how to make every legal combination in other languages by placing diacritics correctly on each letter using Impression’s “kerning” mechanism (which is rather more flexible than metal type kerning!). To handle Vietnamese, it would either have needed additional, smaller diacritics (for which there was insufficient room within the 224-entry font table limit) or to have altered the size of the diacritics in Impression as well as kerning them into position. Never needed to bother, as we didn’t have any Vietnamese authors. In more recent years I’ve acquired a number of Vietnamese friends. |
nemo (145) 2529 posts |
Yes. Combining accent positioning works by having a number of attachment points around the base glyph, sorting the attached accents into Unicode order, and then applying them to the base. Every time an accent is attached to one of the points, that point adjusts for that particular accent. This works well for the generic case (and is grossly misused in my example above), but cannot prevent accents on different attachment points clashing with each other. This would be a highly unusual case.

However, at that point other OTF functionality steps in. The GPOS table can contextually reposition glyphs (the exact kind of kerning Clive was having to do manually), and the GSUB table can replace entire runs of codepoints with alternate glyphs – this can be as simple as a precomposed accented glyph, or in the Vietnamese case can replace the pair of co-located accents with a combined form. These substitutions can be done contextually as a ‘feature’ – ie only when setting Vietnamese, or when using ‘old style’ ligatures, or whatever else the font designer may wish.

Unfortunately, on Windows it can also depend on which language your OS is set to, which version of the OS you have, and even where in the world you bought your computer. So that’s not ideal. Note also that the more sophisticated the font is the less application support there is for the features it contains – Adobe products tend to have the best typography support, other apps may have a much more restricted font UI. Also note that Apple went their own way with their own typographic font tables, so Apple fonts on Apple OSes may well behave differently when used elsewhere. This is why artwork formats such as PDF are not as editable as the original document – localised settings should NOT change how any part of the PDF is rendered. |
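For the curious, a font’s GSUB and GPOS tables can be inspected programmatically – HarfBuzz will enumerate the feature tags each table carries. A minimal sketch in C, assuming a reasonably recent HarfBuzz (hb_blob_create_from_file needs 1.7.7 or later) and “font.otf” as a stand-in path:

  /* list_features.c – sketch: list the GSUB/GPOS feature tags a font exposes.
     Build with: gcc list_features.c $(pkg-config --cflags --libs harfbuzz) */
  #include <stdio.h>
  #include <hb.h>
  #include <hb-ot.h>

  static void list_features(hb_face_t *face, hb_tag_t table_tag, const char *label)
  {
      hb_tag_t tags[64];
      unsigned int count = sizeof tags / sizeof tags[0];

      /* Fetch up to 64 feature tags from the given layout table (GSUB or GPOS). */
      hb_ot_layout_table_get_feature_tags(face, table_tag, 0, &count, tags);

      printf("%s features:\n", label);
      for (unsigned int i = 0; i < count; i++) {
          char buf[5] = { 0 };
          hb_tag_to_string(tags[i], buf);    /* four-character OpenType tag */
          printf("  %s\n", buf);
      }
  }

  int main(int argc, char **argv)
  {
      const char *path = argc > 1 ? argv[1] : "font.otf";   /* example path only */
      hb_blob_t *blob = hb_blob_create_from_file(path);
      hb_face_t *face = hb_face_create(blob, 0);

      list_features(face, HB_OT_TAG_GSUB, "GSUB");
      list_features(face, HB_OT_TAG_GPOS, "GPOS");

      hb_face_destroy(face);
      hb_blob_destroy(blob);
      return 0;
  }

Running it over a decent OTF will typically show tags such as liga, kern, mark and mkmk – the last two being the mark-attachment machinery described above.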
Clive Semmens (2335) 3276 posts |
Bloody hell! I wasn’t doing it manually! Sadly I couldn’t build it into the OS, nor even into Impression, but the input filter I wrote that accepted any kind of word processing file (and there were dozens of them in the early 90s) produced DDF files for import into Impression – with all the accents already positioned, all the necessary kerning automatically generated. If a copy-editor had to put accents in later for any reason, they had to export DDF from Impression, drop it on the filter, and reimport it – and the filter did all the accent positioning for them. One accented character or five thousand of them, done in a fraction of a second. |
Rick Murray (539) 13806 posts |
It’s an interesting concept – you created a bodge (clever script for Impression) that associates accents with characters in a way that a more capable font rendering system ought to be able to do (semi-?)automatically.
I can understand older versions being less capable (I had to install a monochrome emoji font into XP in order to see any of them), but the other stuff (depending upon language and geographic location), does that still apply to Win10?
There’s a surprise.
Ideally we’d have a situation where the application says “this bit is <language>” and the font system sorts out the rest. But… come back to this part:
If the application itself has to do a chunk of the work, how is this that different to Clive’s filter? Okay, it’s not Impression itself, but it’s something that helps generate rules to mean “do this, then put this here” to get Impression to create the correct output. That’s not the same thing as “here’s some text in a weird language, display it”.
It is kind of funny in Android – looks like patterns on early Mesoamerican pottery. |
Clive Semmens (2335) 3276 posts |
That’s exactly how it looks on Firefox on my Mac. |
Clive Semmens (2335) 3276 posts |
So could I. But there’s so much more about Microsoft that annoys me that I’ve given up on it and now put up with Apple – warts, crabs, scabies, aids and all. |
Rick Murray (539) 13806 posts |
Firefox on Android 8.something: |
Steve Pampling (1551) 8155 posts |
I said for years that the failings and need for support that MS generates has ensured I can pay the mortgage. |
nemo (145) 2529 posts |
And if anyone changed the font it would all go wrong. Glyph composition should either be in-font or font agnostic… this is neither. I realise you had no choice on this OS.
Not only does it still apply, but last time I checked there’s no reliable way to find which features have been switched on automatically by the OS.
And that’s exactly how it works, except instead of <language> you have any of the features that the font supports.
Indeed, and vice-versa. Without going into the complexities of Unicode Canonical Form, the font should be able to cope with a grave attached to an a-acute, or an acute attached to an a-grave, or an acute and a grave attached to an a – it should make no difference.
Put simply, the individual font says “I can do these wizzy things”, and then in the font selection window you would switch on the features you want – that might be old-style digits, or swashes on capitals, or fancier ligatures. Language settings in the application may also switch on some features (which script is being used, whether it’s vertical), and that’s automatic but obvious. However, other features may be switched on by the OS because of locale settings. The applications do not tend to tell you that this has happened, because they don’t know.
FontManager does not have combining accents, so these are just glyphs being rendered by the application. NetSurf is failing to emulate the movement of the attachment points that OTF fonts achieve. It shouldn’t have to be emulating anything of course – it should have a modern font engine. What does it do with web fonts? Ignore them, presumably.

There’s little point examining my silly example closely – it’s not a real usage, so different fonts will produce different effects. Also, real Unicode text handling requires a stage called glyph shaping, which is a character code reordering and replacement at the Unicode level before the font even gets a look at it. That can vary from app to app and OS to OS too. HarfBuzz is recommended for glyph shaping – it’s insanely complicated and you’d be mad to attempt it yourself. |
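To make that shaping stage concrete, here is a minimal sketch of what an application-level call into HarfBuzz looks like. It is an illustration only – it assumes a recent HarfBuzz (so that hb_blob_create_from_file exists and hb_font_create gets working OpenType font functions) and “font.otf” is just a stand-in path. It shapes the same Vietnamese letter twice, once precomposed and once as a base letter plus combining marks; with a well-behaved font the GSUB/GPOS machinery should give the same glyphs and positions either way, which is the canonical-equivalence point made earlier in this post:

  /* shape_demo.c – sketch: shape Vietnamese text with HarfBuzz.
     Build with: gcc shape_demo.c $(pkg-config --cflags --libs harfbuzz) */
  #include <stdio.h>
  #include <hb.h>

  static void shape_and_dump(hb_font_t *font, const char *utf8, const char *label)
  {
      hb_buffer_t *buf = hb_buffer_create();
      hb_buffer_add_utf8(buf, utf8, -1, 0, -1);
      hb_buffer_set_direction(buf, HB_DIRECTION_LTR);
      hb_buffer_set_script(buf, HB_SCRIPT_LATIN);
      hb_buffer_set_language(buf, hb_language_from_string("vi", -1));  /* Vietnamese */

      hb_shape(font, buf, NULL, 0);   /* GSUB and GPOS are applied here */

      unsigned int n;
      hb_glyph_info_t     *info = hb_buffer_get_glyph_infos(buf, &n);
      hb_glyph_position_t *pos  = hb_buffer_get_glyph_positions(buf, &n);

      printf("%s:\n", label);
      for (unsigned int i = 0; i < n; i++)
          printf("  glyph %u  advance %d  offset (%d,%d)\n",
                 info[i].codepoint, pos[i].x_advance, pos[i].x_offset, pos[i].y_offset);

      hb_buffer_destroy(buf);
  }

  int main(void)
  {
      hb_blob_t *blob = hb_blob_create_from_file("font.otf");  /* example path only */
      hb_face_t *face = hb_face_create(blob, 0);
      hb_font_t *font = hb_font_create(face);

      /* Two canonically equivalent spellings of U+1EA5 (a with circumflex and acute). */
      shape_and_dump(font, "\xE1\xBA\xA5", "precomposed");
      shape_and_dump(font, "a" "\xCC\x82" "\xCC\x81", "a + combining circumflex + acute");

      hb_font_destroy(font);
      hb_face_destroy(face);
      hb_blob_destroy(blob);
      return 0;
  }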
nemo (145) 2529 posts |
I’ve mentioned ‘font features’ many times without really explaining what they are. I’m going to remedy that now, but you’re not going to like it. Grab your support chemical of choice (water is good), sit comfortably, and then feast your eyes on the first of five pages of features that Microsoft know about. There are undoubtedly more.

TLDR (which could also be a feature), here’s an example feature which is often useful:

c2pc – Petite Capitals from Capitals

This turns all-capital acronyms into small caps automatically (if the font has that feature; if your app recognises that; if it gives you a user interface to select it; if you switch it on for that region of text). However, Feature is just the third of the variation selectors – Script and Language can also play a part. That MS website also lists the script and language tags available.

Modern fonts are complicated and highly capable. FontManager is not just not in the same class, it’s not even on the same planet. “Punching above its weight” it does not. |
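To show how an application actually ‘switches it on for that region of text’, here is a small hedged sketch using HarfBuzz: the feature tag is simply passed alongside the text at shaping time. The choice of c2pc follows the example above, and of course nothing happens unless the font really contains that feature:

  /* Sketch: request the c2pc feature for a run of text (HarfBuzz). */
  #include <hb.h>

  void shape_with_c2pc(hb_font_t *font, const char *utf8)
  {
      hb_buffer_t *buf = hb_buffer_create();
      hb_buffer_add_utf8(buf, utf8, -1, 0, -1);
      hb_buffer_guess_segment_properties(buf);   /* direction, script, language */

      hb_feature_t feat;
      /* "c2pc" on its own applies to the whole buffer; ranges like "c2pc[3:8]"
         or "c2pc=0" (switch it off) are also understood by the parser. */
      hb_feature_from_string("c2pc", -1, &feat);

      hb_shape(font, buf, &feat, 1);   /* only takes effect if the font has the feature */

      /* ...read back glyphs and positions as in the earlier sketch... */
      hb_buffer_destroy(buf);
  }

In CSS the equivalent request is spelt font-feature-settings: "c2pc"; DTP applications expose it (or don’t) through whatever typography panel they provide.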
Steve Pampling (1551) 8155 posts |
That, most definitely, counts as drier than the Atacama Desert. Anyone who starts on water will be on 100 proof spirit before the end. |
Clive Semmens (2335) 3276 posts |
If you mean “changed from one font to another,” no, as long as they were fonts used in our Journals, all was well – the filter recognized which font was in use from the DDF, or defined the choice of font itself on initial import from author’s files. But obviously if someone changed to a font I’d not set it up for, it wouldn’t work, no. I’m not a magician. They’d have been breaking house style anyway, so it really wasn’t an issue. “If the font doesn’t have that feature” is exactly the same for any system.
It wasn’t me who said it did, of course. But it wasn’t bad compared to what else was around at reasonable cost in the early 90s, and was more easily tricked into jumping through hoops than other systems. I wrote most of my apps in BBC BASIC, with the odd bit here and there in assembler where speed was called for. The whole system was copy-editor-friendly in a way which other systems still don’t seem to think about at all. When you’re producing one or two of these a month, it’s worth making the copy-editors’ lives as easy as possible: http://clive.semmens.org.uk/Photos/Assortment/JPhysiol.html And yes, half-tones like the ones on the front cover there (spot colours, we couldn’t do CMYK) could be done on Calligraph printers okay – but photos of tissue samples and things like that were way beyond our capability, even in monochrome. |
Andrew Rawnsley (492) 1443 posts |
Clive seems to have “got” what I meant. Modern fonts are complicated and have lots of great features (which apparently most software can’t cope with, from what Nemo says – urgle!) – don’t think anyone’s denying that (nor suggesting that RISC OS can begin to compete, or even attempt to compete, in those areas).

However, the phrase I used was chosen with care – “punching above its weight”. It’s a boxing expression, where weight classes ensure that competitors fight those of an equal standing. In this case, I was talking about what was available in the 80s/early 90s when FontManager was designed – I have horrendous memories of ATM and Bitstream solutions in DOS/Windows from that era, for example. I recall leaving a computer running for about a day (or more) pre-calculating Bitstream fonts for (WordStar 6 maybe?) and they still looked awful.

What FontManager did was give nice on-screen and print results, with great (well, for the time) performance, even on an 8MHz ARM2. It was possible to edit on-screen with nice fonts, print to most available printers (OK, drivers have always been a necessary evil), and end up with something that looked highly presentable and extremely professional. Not only that, but by making it part of the OS, it was standardised across all RISC OS applications (more or less), which was also very important (in a similar way to the inclusion of the draw file format as part of the OS).

By contrast, I recall messing around with ATM on Macs (it was worse on DOS/Windows), which still came with multiple bitmap fonts, and performance was woeful and quality highly variable. In part this may be down to software – most of the “visual” software (ie. with good layout control) didn’t provide much in the way of editing facilities, so you generally needed to prepare text in one app, then lay it out and try and make it look pretty(ish) in another.

Now, the RISC OS fonts and documents can still be used today, which is nice. And, for all the talk of accents, as a UK English user (which is what it was designed for, let’s be under no illusions) it still produces results that non-RISC OS users find highly agreeable/presentable etc. No-one is making any claims to its internationalisation or support for clever modern features. Just that it still works (nicely – try it in high dpi), runs like the clappers, and provided your needs fall within its limited remit, it remains very useful/usable. Oh, and it has just (in part) secured a contract for tens or hundreds of thousands of RISC OS “front facing” units, which will benefit the OS and its users in many ways to come :) The same can’t really be said for its contemporaries.

Also, remember we just had “fun” with FreeType in the browser project. Ended up having to add third-party software “on top” in order to get any semblance of usable performance, requiring (at a guess) 50-100x the amount of code to be crunched. But hey, we get to display web fonts and stick accents in unsightly places, so all is good ;)

Nemo’s right though – we do need a solution that offers better internationalisation. It probably doesn’t want grafting on to what we have already, as it’ll likely just further break compatibility for no good reason (there are already issues/oddities using UTF fonts on earlier OS versions). Also, adventures in UTF show us that applications need to support the new features, so it is probably better to have a new module that is soft-loadable on older systems, to avoid the platform split that occurred with the RISC OS 5 Unicode version.
And any solution needs to come with an updated !Printers that can cope with it, because we don’t want a repeat of the whole “can’t print from Netsurf (?) because it is using UTF8 fonts” fiasco. |
Michael Drake (88) 336 posts |
Andrew Rawnsley wrote:
I think that the ROOL !Printers has been updated to have Unicode support. It’s just the PostScript printer driver that comes with RISC OS that lacks Unicode support. There’s a commercial PostScript printer driver with Unicode support available from MW Software. |
Michael Drake (88) 336 posts |
Would it be possible to add support for RISC OS fonts to FreeType 2, and re-implement the FontManager module API with FreeType/HarfBuzz? That way existing applications could work without changes, newer apps could take advantage of extra features, and better font formats could be supported, while keeping consistent font rendering across the desktop. Oregano always used to stick out, because its text rendering looked completely different to the rest of the desktop. |
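As a very rough sketch of that idea – the function name paint_utf8 and the notion of mapping it onto Font_Paint are hypothetical here, only the FreeType and HarfBuzz calls themselves are real – the core of such a re-implementation might look something like this, with the caller assumed to have already set the face size via FT_Set_Pixel_Sizes:

  /* Hypothetical sketch: the inner loop of a “Font_Paint over FreeType + HarfBuzz” routine. */
  #include <ft2build.h>
  #include FT_FREETYPE_H
  #include <hb.h>
  #include <hb-ft.h>

  int paint_utf8(FT_Face face, const char *utf8, long pen_x, long pen_y)
  {
      hb_font_t *font = hb_ft_font_create(face, NULL);   /* wrap the FT_Face for shaping */

      hb_buffer_t *buf = hb_buffer_create();
      hb_buffer_add_utf8(buf, utf8, -1, 0, -1);
      hb_buffer_guess_segment_properties(buf);
      hb_shape(font, buf, NULL, 0);                       /* GSUB/GPOS applied here */

      unsigned int n;
      hb_glyph_info_t     *info = hb_buffer_get_glyph_infos(buf, &n);
      hb_glyph_position_t *pos  = hb_buffer_get_glyph_positions(buf, &n);

      for (unsigned int i = 0; i < n; i++) {
          /* info[i].codepoint is now a glyph index, not a character code. */
          if (FT_Load_Glyph(face, info[i].codepoint, FT_LOAD_RENDER) != 0)
              continue;

          /* hb-ft positions are in 26.6 fixed point, i.e. 1/64 pixel. */
          long gx = pen_x + (pos[i].x_offset >> 6) + face->glyph->bitmap_left;
          long gy = pen_y - (pos[i].y_offset >> 6) - face->glyph->bitmap_top;

          /* blit_bitmap() stands in for whatever actually plots the pixels. */
          /* blit_bitmap(&face->glyph->bitmap, gx, gy); */
          (void)gx; (void)gy;

          pen_x += pos[i].x_advance >> 6;
          pen_y -= pos[i].y_advance >> 6;
      }

      hb_buffer_destroy(buf);
      hb_font_destroy(font);
      return 0;
  }

The hard part of such a project would presumably not be this loop but reproducing FontManager’s existing behaviour – font identifiers, the cache, paint matrices and the SWI error semantics – faithfully enough that existing applications don’t notice the change.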
Rick Murray (539) 13806 posts |
Yes, I caught that too. :-)
I think we all know what that means – it is how we’d describe the United Kingdom as a member of the European Union.
Ah, the lovely Borland BGI fonts.
Yes, FontManager was pretty amazing in the late ‘80s and early ’90s. However the rest of the world has moved on. You know it’s getting silly when a little multimedia player (Creative Zen) can speak Japanese, Chinese, Hebrew, Russian… and cope without a hiccup with song titles in any of those scripts while set to “English”.
One could say the exact same thing about Windows’ GDI. Certainly, the on-screen fonts looked rubbish until XP and ClearType came along, but generally speaking what one saw on-screen was what it looked like in print.
A lot of old stuff can still be used today. I downloaded WordPerfect to stick on DOSbox just to remind myself how bad it was…
And there’s the crux of the matter. Basically it’s great if you speak English, it’s mostly okay if you speak a language with the usual Western European twiddles. And if you speak something else or want to mix up other language content, you’re S.O.L. The thing is, the internet is helping to break down these mostly artificial barriers. What was a British system mostly sold to British people now has a shot at global appeal, and that’s when such things as “it can handle some sort of Unicode but normally doesn’t” stand out as limitations.
As long as we have that noose around our neck, we’re going to struggle to go forward. You know, there’s a pile of software available for older systems. And that which is available now is going to need to be updated to support any sort of serious attempt at Unicode. That doesn’t instantly obsolete existing software on older operating systems.
That seems to me to be Jeremy Corbyn logic. A yes-but-no-but-yes-but-no that means another year passes and nothing happens. If anybody is going to work on font rendering in the future (which would surprise me, given how little has happened since Acorn originally wrote it), I would like to ask the author to concentrate on RISC OS 5 first and worry about older systems as an afterthought. RISC OS 5 is free (and now OSI open source) and the lowest-spec machine is dirt cheap and can run rings around any RiscPC. There’s literally nothing other than “my favourite killer app that I can’t do without” (assuming Aemulor – which is also free these days – can’t help) and belligerence stopping a person from upgrading. It’s a ridiculous state of affairs when plans for our efforts to move forward are always tempered by an obligation to provide continual support for a quarter-century-old operating system.
That’s not !Printers. I can print Japanese from Ovation to an fx-80 without problems. It’s because the PostScript driver is rubbish and gets UTF-8 horrifically wrong. However…
The real fiasco, if you ask me, is that FontManager does not support fallback. If one switches to UTF-8 mode, immediately anything non-English will fail to render correctly. As it is right now, the OS can be UTF-8, or not.
The fonts always seemed just a smidgen too big, in my opinion. |
Alan Robertson (52) 420 posts |
I’m going to bite. Are you able to divulge any more information on this tasty morsel of news? |
Andrew Rawnsley (492) 1443 posts |
When we made RISC OS Open Source last year, one of the big motivators was to remove the per-unit barrier of entry that the old Castle licence imposed. Deals like this one were what we had in mind with that. Just because the OS is Open Source doesn’t mean that customers won’t pay for additional work to happen as part of a larger project. What is often needed is a solution – in this case, RISC OS had already proven itself on a smaller scale, so when the project expanded to customer-facing units (ie. boxes that users will interact with), they were given a choice of continuing with RISC OS, or moving to Linux or Windows. RISC OS was selected because it was shown to be able to produce good results, easily and cost-effectively. We’ll probably have a more extensive press release in due course, but basically this is (one of) the things that the RISC OS Developments team have been working towards. |