Chars
Doug Webb (190) 1158 posts |
2.01 works here as well with my fonts installed – 63 font families. Thanks, Chris, for your efforts and support, and also to ROOL for their continued support. |
Chris (121) 472 posts |
Glad to hear that the problem appears to be fixed – many thanks for the feedback. |
Sprow (202) 1155 posts |
Draw 1.30 and later now handle editing the string as you type. While implementing that, it did occur to me that subsequent edits (Ctrl-E) would be better done in place, because at the moment it opens a Wimp writable in the desktop font, which may look nothing like what ends up in the diagram (e.g. Selwyn font). Volunteers are sought for ticket 426 – that nicely isolated bit of work. |
Rick Murray (539) 13806 posts |
If you are using RISC OS 5 and can hack Chars to output Unicode whilst the machine is still in Latin1 mode (it’s a minor tweak), you can test using Ovation (in UTF mode – hold Alt as it loads). Cursor movement and editing work perfectly. This is how it should be done: two apps communicating in Unicode without reference to the underlying “alphabet” setting of the OS. Plus, without changing the alphabet, it won’t arbitrarily break everything else that has been translated into non-English. It shouldn’t be either/or; it should be old apps and new apps. |
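For anyone wondering what that “minor tweak” amounts to in byte terms, a minimal sketch in C may help – this is not the actual Chars source, and the function name is made up. Every Latin-1 character code coincides with the Unicode code point of the same value, so sending it as UTF-8 simply means re-encoding it as one or two bytes:

```c
#include <stddef.h>
#include <stdint.h>

/* Encode a single Latin-1 character code (0-255) as UTF-8.  Latin-1 code
 * points match the first 256 Unicode code points, so codes below 0x80
 * pass through unchanged and the rest become two-byte sequences.
 * Returns the number of bytes written to out (which needs room for 2). */
size_t latin1_to_utf8(uint8_t code, uint8_t out[2])
{
    if (code < 0x80) {
        out[0] = code;
        return 1;
    }
    out[0] = 0xC0 | (code >> 6);      /* lead byte:         110xxxxx */
    out[1] = 0x80 | (code & 0x3F);    /* continuation byte: 10xxxxxx */
    return 2;
}
```

For example, byte 192 (À in Latin-1) comes out as the two bytes &C3 &80, which is U+00C0 in UTF-8.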
Jeffrey Lee (213) 6048 posts |
One downside of the new !Chars: because it doesn’t show ASCII codes 0-31, you can’t use it to enter (e.g.) newline characters into documents or task windows if you’re stuck without a working keyboard. |
Chris (121) 472 posts |
Ah, OK. What’s the best fix for this? Just reinserting the old control codes as before? Or maybe adding some buttons for the most useful ones (carriage return, escape, tab, etc.)? |
nemo (145) 2529 posts |
You. Don’t. Say… (You may wonder quite why I should be generating Draw files encoded in EBCDIC… and the answer is Because I Can)
You’ll be surprised to learn that I disagree completely. Short version: programs need to know what byte value 192 means.

Long version: my current plan is to supplant Country/Territory/Alphabet completely (whilst retaining them for compatibility) and simply override what old apps do via MessageTrans, the WindowManager etc. transparently. It is non-trivial, but necessary to fix something that has never really worked.

Consider a theoretical Québécoise trying to use RISC OS to run both old and new programs. She lives in Canada, speaks French, wants to use Zap, and upgrades to RO5. As was, she’d have been tempted to configure her Country as Canada1. Trouble is, how many programs shipped with resources for country 17? If there’s no Canada1 resource in the program, it’ll default to British English. So, resigned to the fact that her machine will believe she uses Euros, she had to select Country 6 – France – so that she could read things in her native tongue. Naturally she had to fix her clock, too. But Zap, StrongHelp et al displayed their help in French, so all was well. She may have blinked slightly when RO5 suddenly decided that anything that talked about the dieresis or acute accents now spoke about š and ž instead, but c’est la vie, n’est-ce pas? That is to say, RO5 changed which Alphabet is used for Canada1/France – i.e. which 8-bit encoding was used – so the same character codes in all the help files would now display different characters (well, a few of them).

As you know, I have proposed using IETF language codes for internationalisation. Our Québécoise user could fine-tune her language preferences, but if she just chose “Canada1” that would auto-select “fr-CA,Canada1,17,fr,France,6” as her language preference, along with EST as her timezone, CAD as her currency, and so on. The outdated Country would be set to 17, and old applications would try their best; but when they try to access their “Canada1” or (more likely) “UK” resources, the new system automatically searches for “fr-CA” first, trying each element of the language preference string until it finds an appropriate file, and substitutes that. There are only a few idioms in use by applications, and they’re pretty easy to spot when you know the Country number they’re trying to accommodate.

However, the French example hides some additional complexity, so let’s take a polyglot example I’ll call “Martin”. His native language is German, but he can greet you in Hebrew and trade insults in Russian. His language preference string would therefore be hand-tuned (as there ain’t no Country number big enough for our Martin) – something like “de-DE,he,ru” (it’d be longer than that, trust me). This may mean “Germany” for one program and “26.Messages” for another, but he can handle either, which is great… However, those different files are in different Alphabets. Different encodings. The only way to make them play together is via the magic of Unicode, delivered via the sensibly sized medium of UTF-8. I think we’re all agreed on that.

So when Martin runs a program that calls ReadC (or gets a Wimp keypress message) and receives 192… what is that program to think? Is it a Latin À, a Greek ΐ, or maybe a Cyrillic Р? Or is it the first byte of a UTF-8 sequence? The only way it can tell is by the Alphabet number. That’s definitive. Country numbers are next to useless, but Alphabet numbers work fine. In my system I have a FallbackAlphabet too, which is used to interpret bytes that are clearly not part of a valid UTF-8 sequence. |
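As a rough illustration of that last point – not nemo’s actual implementation; the function name and the Latin-1 fallback are purely illustrative assumptions – a decoder with a fallback alphabet might look like the C sketch below. Note that byte 192 (&C0) can never start a valid UTF-8 sequence (it would be an overlong encoding), so a validating decoder hands it straight to the fallback path:

```c
#include <stddef.h>
#include <stdint.h>

/* Decode one character from buf[0..len), writing the Unicode code point to
 * *cp and returning the number of bytes consumed.  Anything that is not a
 * valid UTF-8 sequence (or is truncated) is interpreted via the fallback
 * alphabet instead - Latin-1 here, purely for illustration.  Overlong
 * 3/4-byte forms and surrogates are not rejected, to keep the sketch short. */
size_t decode_next(const uint8_t *buf, size_t len, uint32_t *cp)
{
    uint8_t b;
    size_t need = 0;
    size_t i;
    uint32_t value = 0;

    if (len == 0) {
        *cp = 0;
        return 0;
    }

    b = buf[0];
    if (b < 0x80) {                            /* plain ASCII */
        *cp = b;
        return 1;
    }

    /* How many continuation bytes does this lead byte require?  Bytes
     * 0x80-0xBF (bare continuations) and 0xC0/0xC1 (always overlong) never
     * start a valid sequence, so they keep need == 0 and fall through. */
    if (b >= 0xC2 && b <= 0xDF)      { need = 1; value = b & 0x1F; }
    else if ((b & 0xF0) == 0xE0)     { need = 2; value = b & 0x0F; }
    else if (b >= 0xF0 && b <= 0xF4) { need = 3; value = b & 0x07; }

    if (need > 0 && need < len) {
        for (i = 1; i <= need; i++) {
            if ((buf[i] & 0xC0) != 0x80)
                break;                         /* not a continuation byte */
            value = (value << 6) | (buf[i] & 0x3F);
        }
        if (i == need + 1) {                   /* complete, valid sequence */
            *cp = value;
            return need + 1;
        }
    }

    /* Not UTF-8: use the fallback alphabet.  In Latin-1 the code point
     * equals the byte value, so byte 192 becomes U+00C0 (A-grave). */
    *cp = b;
    return 1;
}
```

The same shape of check is all an application needs in order to answer the “what is that program to think?” question, provided it knows which alphabet (and fallback alphabet) is in force.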
Jeffrey Lee (213) 6048 posts |
Just displaying them as before will probably be fine. Since we now display the character name when you hover over it, that should be enough to help people find the one they’re after. |
Chris (121) 472 posts |
OK, I’ll see if I can put them back in. Sadly I don’t seem to get any time to program at all these days (family, work, life…). I’ve got a list of fixes for SciCalc to look at too – including some enhancements you sent me ages ago… Where does the time go? :( |
Rick Murray (539) 13806 posts |
And if her operating system were one where things were properly configurable, rather than just a selection of potentially inappropriate hardcoded choices, then she wouldn’t have all these issues… |