FP support
Pages: 1 2 3 4 5 6 7 8 9 10 11 12
Rick Murray (539) 13850 posts |
You mean like a variant data type (it can be anything depending on how you use it)? I can understand how it can be simpler for a casual user than dealing with things like integers and reals and floats (if different) and strings and… Isn’t it so much nicer to read an entered (string) value and add a number to it to print the result, without casting and conversion? On the other hand, as a more experienced programmer, I wonder if this could lead to many misunderstandings: code not behaving as it should because you and it disagree on what <var> is at that particular instant? I liked VisualBasic’s approach. “Variant” was a type. You could use it. But, if you thought the idea horrendous and preferred to use typed variables, they were all there too. As for capacity, a Variant could hold the same precision as a Double, in twice the storage (16 bytes per variable, as opposed to 8). As a string, it could hold 0 to ~2 billion characters, with twice the overhead of a variable-sized String (22 bytes + string, as opposed to 10 bytes + string).
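To illustrate the kind of disagreement a variant-style variable invites — sketched in Python, whose variables are similarly untyped, rather than in VisualBasic; the function name is invented for illustration:

```python
def add_one(v):
    # v is "variant-like": the caller decides whether it holds
    # a number or the raw string the user typed in.
    return v + 1

print(add_one(41))       # caller passed a number: prints 42
try:
    add_one("41")        # caller passed the entered string instead
except TypeError:
    print("you and the code disagree on what v is")
```

The code works until the day a caller's idea of the variable's type differs from yours — exactly the hazard described above.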
I think lua.org would disagree – second section down on the right side of the “About” page. ;-) This is one of the interesting aspects of Lua. It can be a language, but it can also be an embedded interpreter. It may be that Lua could take over from where perl used to be fairly common? I’ve noticed Lua in various things. I may be wrong, but I think Blender uses it, for instance. This would be a good thing for end users too, as knowing Lua means that skill could be applied to all of the software that uses it (plus as a language in its own right, like RiscLua) instead of needing to get to grips with the peculiarities of some custom probably-C-like languages that used to be built into big programs. Enough like C to look familiar, enough unlike C to be extremely frustrating. |
GavinWraith (26) 1563 posts |
Like: values, not variables, have types, with the nil type a common subtype of all types. In such a system JITs have an advantage over static compilation, in that by the time they start compiling they have had time to find out what types the values have. There has been discussion, and some preliminary experiments, on making Lua more statically typed. The chief use of variant types is in the convention that functions return nil and an error value (multi-returns, remember) to signal failure. So, for example, in RiscLua the sys function (analogue of BASIC’s SYS) returns the values in registers R0-R7 on exit from the SWI if successful, or otherwise nil followed by an error message. It would seem from discussions in the forums that a lot of people would like Lua’s syntax to be more C-like. That array values start at 1 rather than 0, in particular, has always raised a lot of horrified complaint. Tables in Lua can have any non-nil value as a key, but keys that form a consecutive sequence of integers starting at 1 have special status. In particular, their values are stored consecutively in memory – but do not tell the users, because they are not supposed to be thinking about such implementation details. Knowing Lua has long been a skill appreciated in the games-programming world, I am told. |
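The nil-plus-error convention can be mimicked in any language with multiple return values; here is a Python sketch, not RiscLua’s actual implementation — `sys_swi`, its fake SWI table, and the error text are all invented for illustration:

```python
def sys_swi(swi_name, *regs_in):
    # Mimic RiscLua's convention: on success return the result
    # registers; on failure return None (Lua's nil) followed by
    # an error message.
    known_swis = {"OS_WriteC"}           # pretend this SWI exists
    if swi_name in known_swis:
        return tuple(regs_in), None
    return None, "SWI %s not known" % swi_name

regs, err = sys_swi("OS_WriteC", 65)
assert err is None                       # success: registers, no error

regs, err = sys_swi("OS_Bogus")
assert regs is None                      # failure: nil plus a message
```

The caller checks the first value before using the rest — the variant type does the work of an error flag.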
David Feugey (2125) 2709 posts |
OK. So, status quo: same situation as two years ago. |
Colin (478) 2433 posts |
It always surprises me that people think this way. Why get upset when people are doing what you have given them licence to do. If they are not happy they should have used a different licence – crazy. |
Fred Graute (114) 645 posts |
@Jeffrey
It looks like you’re right. It’s a bit misleading, though, as it looks like a valid instruction, but when you try to alter it in StrongED the BASIC assembler complains.
I didn’t; as there’s not much RISC OS code around that uses these instructions, I resorted to using some DAs to get some of the more exotic encodings. :-) @Steve
VNMUL & friends are exactly the same as VFP VMUL & friends. The only difference is that the result is negated before being written to the destination register.
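In other words — a trivial sketch of the arithmetic only, not of the instruction encodings:

```python
def vmul(a, b):
    # VFP VMUL: destination = a * b
    return a * b

def vnmul(a, b):
    # VFP VNMUL: same multiply, but the product is negated
    # before being written to the destination register.
    return -(a * b)

assert vnmul(2.0, 3.0) == -vmul(2.0, 3.0) == -6.0
```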
There is a trailing space after the #Prefix command and that’s causing the problem. Remove it and the manual works fine with StrongHelp 2.88a1. Don’t know yet why this is different from earlier versions. |
Rick Murray (539) 13850 posts |
Perhaps the problem is that we’re also dealing with some pre-UAL sources and some UAL sources (Google VFP returns an assortment)? Here’s a handy conversion chart, VNMUL is in there too : |
Steve Fryatt (216) 2105 posts |
Firstly, we don’t know that they do think this way. Second, the issue raised was with distributing a ‘built’ version of the SharedUnixLib module. There’s a world of difference between me building a copy for my own use, and building a copy to distribute to other people. Both are perfectly valid and permitted by the licence, but I’d hope that any competent developer could see why it might be helpful to avoid the possibility of several different versions of the same module, all calling themselves “1.13”, ending up on various non-technical users’ computers. Especially a module which potentially affects all other third-party software on those computers which is linked to UnixLib. Then again, I’m not a RISC OS GCC developer. Perhaps someone could, you know, talk to them? |
Colin (478) 2433 posts |
Sorry, but I disagree with this. If you want to stop different distributions, use a different licence. The chosen licence gives everyone equal rights. Just because a group of users claims more rights doesn’t mean they have them, or that any notice should be taken of them. For me it’s nothing to do with whether forking is a good or bad thing. |
Jeffrey Lee (213) 6048 posts |
So I suspect that the debugger is disassembling it correctly, and it’s actually the instruction which is at fault. Well, the debugger is only doing what the ARM reference manual says :-) Those instruction words decode to those assembler mnemonics, but they’re unpredictable due to the bad register count, hence the ‘Unpredictable’ comment placed next to them. One of the ideas I had for a newer version of Debugger_Disassemble would be control over whether unpredictable instructions are decoded to mnemonics (as they are now) or treated as undefined instructions. But that plan hasn’t quite come to fruition yet. |
David Feugey (2125) 2709 posts |
It’s really a strange situation because, in the open source world, the advantages of a ‘release often’ model are now known and accepted. There is nothing against a 1.13 version. And if we find bugs (an operation that will be easier with a release, really), make a 1.13.1. VFP and NEON support work, but you don’t have the right to use them for the benefit of other people. Should we make, as with Doom, a tool to permit people to build their own SharedUnixLib? Sorry to be sarcastic. For the DDE, I would love to get support for VFP, as all of the OS will speed up (a bit). IMHO, it would be a huge step forward. A partial-VFP version of FPEmulator would be really great too, for everyone and all software. A VFP-enabled version of BASIC would be fantastic. And, of course, a VFPEmulator would be cool for older systems. But, TBH, I don’t think it is a priority. Compiling the code twice (VFP/FPA) is not a very big deal. And it’s really better than having only one choice: FPA (so FPEmulator, which is not even accelerated). So, 4 tasks: |
Steve Fryatt (216) 2105 posts |
You’ve now claimed this a couple of times; I’ve seen no evidence in this thread that the UnixLib developers have said anything of the sort. Could you point to some evidence? My view, as an — I hope — responsible RISC OS software developer, is that I wouldn’t want to distribute non-standard versions of any shared module with my software. Once they’re out in the wild, that’s it: there are plenty of posts where people offer to send others the version of something that they have on their machine, which can keep something unofficial circulating for years (especially if it looks like the real thing to an end-user). If the version I’ve built and circulated happens to have bugs in it that break other software, how do non-technical users tell it apart from an “official” bug-free version that happens to carry the same version number? RISC OS’s poor handling of different module versions doesn’t help. Of course, the license allows me to do whatever I like (as long as I comply with its terms). Some of those things don’t — in some very specific cases — seem overly useful to end users or other third party developers, and so I choose not to do them because that’s what seems best to me for the platform. You’re free to disagree, of course. I might not be happy if you released your own build of a shared module (especially not if a bug in it that wasn’t in an “official” release caused me to waste time trying to identify a bug that a user reported to me as being in my software); that’s very different from my saying you can’t do it. So far, I’ve only seen suggestions that the GCC team might not “be happy” with such a course of action; nowhere have I seen a suggestion that they would try to stop someone going ahead if they wanted to. Again, personally, when I’ve modified components of GPL’d projects in the RISC OS world, I’ve fed the changes back to the maintainers for release. 
So far, that approach has always been met with courtesy and helpfulness — and minimised confusion for end-users. If you want version 1.13 of SharedUnixLib out in the wild, the best thing to do in my opinion is to ask the developers. They might just do it for you there and then, or there might be a good reason why it hasn’t been released yet. |
Steve Fryatt (216) 2105 posts |
It’s a module. I think you’ll find that RISC OS doesn’t like version numbers like 1.13.1: a lot of the logic in that area seems to have been based on at best a release cycle determined by floppy disc and a postman. |
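A sketch of why a three-part number has nowhere to go, assuming the conventional encoding where a module’s “x.yy” version (two-digit minor) is compared as a single number — the helper function is invented for illustration:

```python
def module_version(s):
    # Conventional RISC OS module versioning: "major.minor" with a
    # two-digit minor, treated as major * 100 + minor for comparison
    # (so "1.13" becomes 113). A third component simply doesn't fit.
    major, minor = s.split(".")          # "1.13.1" fails right here
    return int(major) * 100 + int(minor)

assert module_version("1.13") == 113
assert module_version("1.13") > module_version("1.12")
```

With only two fields to play with, a bug-fix release has to become "1.14" — there is no "1.13.1".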
Rick Murray (539) 13850 posts |
I think you highlighted the wrong word. The key here is shared. It may work on the system of the person that modified it, with the software they wrote – but rather more care and attention is needed to check that it behaves equally well with other software and on other systems.
Well, unlike other systems, RISC OS was only designed to have one version of a specific module resident at once; Acorn emphasised a high degree of backwards compatibility, so we don’t run the risk of – say – three different programs loading up three different versions of CLib (or whatever) because “that one” is the one that a program will work with.
Exactly. If a buggy SharedUnixLibrary is released, who d’you think is going to get the hate mail?
Well, let’s see:
Given the number of things on riscos.info and the GCCSDK site, anybody with a brain can discount the first two options. It would work to everyone’s benefit to have better back-end software backing up the GCC compiler suite. I’ve already demonstrated the massive difference between FPE and VFP. It could possibly be option 3. It is summer time. If any of the developers have families, it might be easier to put the code aside for a while than to explain to the wife and children that daddy can’t do that because… actually, I’d come over and slap some sense into you: a nice summer holiday and your children growing up/doing stuff/etc are one-off events. First day at school? First time she susses how a bike works and doesn’t fall off? First… you get the idea. It’s a little more important than whatever your compiler just said… ;-) But the real reason is probably going to be a lot of the final option. Consider, for example, this post that points out that different processors have different types of VFP. Consider whether or not the machine has NEON at all (does the Pi?). Consider the necessary exception handling, not to mention the facilities that are supposed to be dealt with in software and not hardware (square root is commonly handled in software). Consider the use of contexts: switching too often may hurt performance, while not switching enough might mess up other programs. Consider all of the machines upon which the software could be used. The Iyonix? I believe the XScale does not support VFP. Rule out the Iyonix (and earlier)? Emulate it all? Or trap and try to fake using equivalent FPE instructions? Who knows. I’m just pointing out that “adding VFP” is not as simple as changing a few things here and there.
Neither do I.
Which is, given the era (1986ish), pretty much on the money. |
David Feugey (2125) 2709 posts |
Let’s face it: official software is never bug-free.
That was not my point. I just want to say: if a version has a bug, publish another version to replace it. Perhaps a stupid idea, but if the developers feel it’s really buggy, they could make a SharedUnixLibBeta module to compile beta software with. That way everyone will know that any problems are not theirs.
Hmm, the current version on riscos.info is 1.12-1 :)
Of course, but is this problem linked to GCC or to SharedUnixLib? If it’s the lib, yep, there is a problem (any bug will be the GCCSDK team’s bug). If it’s GCC… hey, it’s the developer of the software who takes the risk (and the blame). The real problem is not the waiting. The real problem is that VFP+NEON software has been compiling for months now (I could even say years) on RISC OS, but the solution is still forbidden for real builds. Some people even leave RISC OS because of this. Example: FastDosBox. A better (more modern, faster, etc.) version was on the road, but no distribution = no product. So, Linux only. That was 2 years ago. Yes: 2. And today, we’re still stuck on this. To be honest, I absolutely don’t care (I only care about BASIC), but I can understand the frustration of some developers, such as Colin. That’s really a bit long :) |
Steve Pampling (1551) 8172 posts |
Rash assumption. Just because the support module is beta does not mean that any bug must lie in the beta module; rather, it means you have to check both the module AND the program in use for bugs. At least with a full release it’s more likely to have had extensive testing against a known test suite.
The real problem may be a combination of what Rick describes (lack of time) and what Steve pointed at:
Which sort of implies that if the person talking knows enough about the issue to suggest a fix then maybe they could be part of the team. I’m not suggesting work for people, but sometimes if you want a job done you have to do it yourself or at least help out. |
David Feugey (2125) 2709 posts |
Yes, I know… but that’s a bit easy too. So users have no say until they become GCCSDK developers? What I see is people who play the game by the rules. They did not want to fork the module, and so their products are delayed/cancelled. I did test some of these products, such as a really better version of the PC emulator, tools to play bigger videos, and emulators that work at much faster speeds. All of this is really stable. But not online (and some never will be). I can understand that there are still bugs. But one day, you must balance the benefits (all this software) against the risks (bugs). We can wait, and many people did. But, really, 2 years? It reminds me of a sad story about the Win32 version of GCCSDK. There was pressure around this, with some good arguments about licences. But also bad arguments. Some people suggested that there would be an official Win32 release, and so it was not fair to make a fork/port. We just stopped it. 15 years (yes, 15!) later, where is GCCSDK for Win32? I’m a bit sarcastic here, but I have nothing against the GCCSDK team. You know, I could just have opted for a fork, as some other people could today just deliver their own version of SharedUnixLib. But eQ chose to play the game by the rules of the community that makes this work. We’ll not change our point of view, and if we need to wait another 15 years for Win32 GCCSDK, we’ll wait. But I think we have the right to say that it’s a bit long. Or not :) |
Steve Pampling (1551) 8172 posts |
I wasn’t aware that there was a hard and fast line. |
David Feugey (2125) 2709 posts |
You’re playing with words here. |
Colin (478) 2433 posts |
I said this once and it is a statement of fact. My response on this subject was a response to a remark Jeffrey made off the cuff. It is a remark I have seen often over the years for many projects. My comment has nothing to do with UnixLib GCC or any program or group of people. The only point that I make is that anyone has the right to do anything within the licence of the program and should not be bullied into forfeiting their rights. |
Rick Murray (539) 13850 posts |
I am a user of the DDE and it bothers me greatly 1 that the more recent versions of GCC are moving to ELF for the libraries and the executables. I do not really understand why using AOF/ALF/AIF is such a hardship – surely the code already exists and it just needs to be inserted in the correct places, as it would, for instance, to make Windows EXEs as opposed to ELF, or even a binary dump for embedded devices / FlashROM. I am fully aware that the GCC licence would permit me to make a fork of their work and to release it and there is nothing the original developers could do provided the release is within the stated terms of the licence. So, if you have the ability and you have the time, why not get in touch with the GCC developers and offer to help? I do not believe the delay is anything malicious, it is probably a combination of the usual issues that affect RISC OS – time and ability. You remember my long discussion about a Wimp that would work exclusively in Unicode and fall back to Latin1 for older apps? If I had time and ability, you’d be running it by now. I’m sure there are plenty of other examples of the great stuff that could be done with a little bit of time and ability. ;-) If you can fork, you can be much more productive helping with the original rather than discussing licences and then going off in your own direction. Offer to help… 1 Within context…whether or not the cat has been fed is more important to me… |
Colin (478) 2433 posts |
I seem unable to get my point across. I tried – I’ll leave it there. |
Rick Murray (539) 13850 posts |
<rant> |
Steve Drain (222) 1620 posts |
@Fred
Thanks. I will probably include the two varieties on the same page. @Rick
Thanks for the pointer to the Keil site. I have been there before, but had forgotten how to find it again. ;-( Given a fair wind, I will have a new version up shortly. ;-) |
Steve Pampling (1551) 8172 posts |
No, I was actually pointing out that there is more than one way of contributing to a project and that if someone’s expertise lies in the wordsmith arena then they can work on documentation updates and still contribute. From each according to their ability… |
jim lesurf (2082) 1438 posts |
Alas, as far as I can understand what people are debating, none of the above gets me further WRT my interest in having RO machines that actually cause existing programs I’ve compiled from ‘C’ using the ROOL compiler to access and use real floating point hardware. In essence I want an FPE / C library module that “just does this”, without the user or programmer having to do anything special for it to occur when FP hardware is present. For a while in the far past, this worked. But not since the relevant machines went out of use. Seems crazy to me that an A5000 could do something no modern RO box can do! Jim |