Programming in Assembly language in 2023: is it worth it? What do you think?
Patrick M (2888) 115 posts |
Hello, I’ve had a question on my mind, and I thought it’d be nice to ask you all and see what you think; I always love to read about the thoughts and experiences of the RISC OS user and developer community. Does it make sense to do assembly language programming (in a general sense, not necessarily just in a RISC OS context) in the modern era, versus other languages such as C, C++, BASIC or Python? I’d love to discuss this and hear what you think. |
Rick Murray (539) 13850 posts | |
Chris Hall (132) 3558 posts |
Programming in assembly language should be confined to time-critical segments of a particular application. A good example is !CountDn, where you need to know, as soon as the six tiles have been chosen, which of the 900 possible targets are easy and which are difficult (so that the user can choose a random ‘Easy’ or ‘Difficult’ target). That takes 24 million calculations of the form A+B+C+D+E+F, where each arithmetical operator is one of the six possible (plus, minus, multiply, divide, reverse subtract, reverse divide). Those calculations take 30s in BASIC or 80ms in assembler (see the Sept 2016 WROCC magazine). |
Chris Mahoney (1684) 2165 posts |
Even if I had to use assembly (such as the time-critical example above), I’d still try to let the compiler do as much as possible. For example, I believe that Norcroft C’s inline assembler allows you to use variable names instead of register numbers, etc. Despite that, I’ve yet to run into a situation where I actually need it, especially with modern (fast) hardware. With the right compiler and linker settings you can even write a simple operating system entirely in C. I’ve actually done this, and it runs on Intel and Arm. I see no reason why I couldn’t compile it for e.g. RISC-V or PowerPC if I wanted to, and I don’t expect to have to make any code changes in order to do so. |
David Lamda (9487) 48 posts |
I was wondering how I could share this interesting article that I was reading last night on here: |
David Lamda (9487) 48 posts |
From perhaps a “common” sense approach: if I wanted to do something in assembly language, I’d probably think to myself that by rewriting that bit in assembler I might be wrongly assuming the higher-level code I was translating was as good as it could be. I’d revisit that code and rewrite the higher-level language instead; masking problems is not good. For pure maths or number crunching, assembly language made sense back in the day (if you could do a better job than a C compiler?). But back then there was quite a strong argument for BASIC and assembly, and not everyone just did C.
I never had to do assembly language on an Acorn Electron; I could always write something quicker in BASIC than my friends could in assembly, but that was a very long time ago, and the cheats/secrets I employed weren’t very widely known. To be honest, I wish I’d learnt 6502 or, later, C. But by the time I’d left school I was saying “nah, BASIC’s the future”. Then VB for Windows came out and the rest is history.
If I was starting out now and had any belief that I was clever enough to be a proper programmer, there are two pieces of advice I’d give: 1. Learn C (not C++). 2. Learn Python. The rest, like Java, can be learnt quickly after mastering 1. and 2.
Erm, my primary school friend who taught himself 6502 went on, I believe, to buy an Amiga, go to university and then, I heard, design chips in Silicon Valley, and I didn’t do anything as vast and noteworthy as him, so please feel free to disregard my 2c. I believe I could probably do a really fast Space Invaders, but is a completely unplayably fast game on modern hardware worth anything? Food for thought. |
Simon Willcocks (1499) 519 posts |
You’ve got to love this line from https://www.theregister.com/2020/06/05/moores_law_coding/ :
|
David Lamda (9487) 48 posts |
I read that to mean a reduced instruction set, but it gets complicated when they throw custom GPU chips in there. In a general sense, maybe they should step back and simplify everything a bit, like Steve and Sophie did: not just so they can make something that will work first time, but taking that deliberately simpler approach with the direct intention of yielding better performance. Straying too far off topic now, sorry, and it’s getting late. |
David J. Ruck (33) 1636 posts |
That wasn’t even true when I wrote my version of !CountDown back in the days of the 8MHz ARM2 Archimedes. Yes, interpreted BASIC was way too slow, but early Norcroft C was able to compute all the simple combinations (~500,000) easily within the time limit. These days a Pi 4B can calculate the full set of combinations including parentheses (~22,000,000) in under a second using GNU C++. Hand-optimised 32-bit ARM assembler is a few ms quicker, but ARMv8 or x86-64 is only a recompile away rather than a rewrite from scratch. An interpreted language such as Python 3 takes 80 seconds, which is still too slow, but changing one statement to make it multi-process (not under RISC OS, of course) brings it down to 23 seconds (2 seconds on a 16-core AMD laptop). |
Stuart Swales (8827) 1357 posts |
The reason Arthur / RISC OS was written in assembler was that we didn’t have a compiler that could produce ROMable position-independent code for relocatable modules (or, indeed, modules at that time!). If we’d had a C or BCPL compiler that did what we needed, we’d have used that. It helped greatly that ARM assembler was a joy to use. I have only used tiny bits of assembler in applications for RISC OS and only where really needed. As David observes, a better algorithm always trumps hand optimising poor code. |
Rick Murray (539) 13850 posts |
This. I wrote a module, the first one I ever did (so it sucked) which used a crappy bubble sort for something. Two hundred items and all the memory shuffling involved took about twenty seconds in BASIC.
So, of course, the easiest solution is not to fix the compiler, but instead to write an entire OS from the ground up in assembler. ;)
I use the inline assembler a bit. But it’s usually for calling SWIs when I want something the libraries don’t do, but can’t be arsed to add yet another function to DeskLib. Once in a blue moon I do something in assembler “because I can”, but really, if you can’t get acceptable results from a processor clocking a gigahertz, it’s probably the code and not the language. |
Steve Pampling (1551) 8172 posts |
That’s probably the best explanation of Windows (and Linux in many ways) that I’ve seen. |
Rick Murray (539) 13850 posts |
The bleating masses don’t want efficiency, they want PowerPoint with animated whizzy effects and bouncing text. They want to write their CVs by inserting some text into a predefined template (unaware, or unconcerned, that their CV will be indistinguishable from all the rest). And they want stuff to install automatically when a disc is inserted rather than, you know, complicated “computer stuff”. Probably the only reason Clippy was so hated was that it was actually spectacularly useless. In this new AI dawn, one could make a Clippy that could easily replace numerous “PA to the MD” roles… Plus, who needs efficiency when you can push sales of faster and bigger memory, faster and bigger hard discs, bigger numbers, all with the help of gold-plated power cables to make sure that AC is extra shiny. |
Steve Pampling (1551) 8172 posts |
I suspect it might, unfortunately, be more intelligent – without the AI
I think that’s where you’re into the territory of the article, faster hardware every year has pretty much hit the buffers and wrecked the train. |
Paolo Fabio Zaino (28) 1882 posts |
IMHO, coding NEW software in ASM: no. It should be avoided where possible (which is 99% of cases). The need for it is very specific: OS bug fixes, compiler design/development/porting, and some very specific performance improvements in certain aspects of compiled code.
However, learning how to code in ASM (and how ASM and machines work) is a very different story, and the answer to that question is (and will always be) YES: it’s worth it on RO and also on every other platform. With learning comes the need to be able to write ASM code, so in that case it’s definitely well worth the effort.
The last place ASM would be needed is if you’re strictly coding for the old platforms (these days called retrocoding), in which case all the limitations of the past still mostly apply, and ASM is therefore still a handy choice. But on RO 5 for RPi it’s definitely not needed (again, unless one is learning how it works and how to use it).
Obviously, being able to read assembly code (and disassembled code) can also be extremely handy, for example when figuring out how certain parts of RO work (although personally I do not encourage mere source reading, because that misses the state of the system, which determines the actual behaviour at any given point), or when having to figure out how a portion of an executable (for which no sources are available) works.
Hope this helps, |
Rick Murray (539) 13850 posts |
I think that’s a lost battle. People these days want a magic variable type “that just works” without any consideration of what’s actually going on inside the machine. After all, if it’s slow, the user can always upgrade to bigger/better/faster. It’s an approach that everybody else has used for decades… You know, Android Go edition (the cut-down version) is aimed at low-resource machines. You know what counts as low resources these days? A quad-core at around 1.5GHz, 16GB Flash, and 2GB RAM.
So long as you’re coding in BASIC. In C, our dev tools are kind of primitive, so the ability to read assembler can help with debugging. Like, do you see a fault being reported as two instructions after an
That way lies madness. You might try to answer some “simple” (note the scare quotes) questions like:
and you’ll quickly discover that it’s about fifty-fifty: fifty percent magic, and fifty percent madness. For some behaviours, Jeffrey has created flow charts of what’s going on. I’ve had to use Adobe Reader on my PC to break them into multiple-page poster prints just to get my 1200dpi laser to produce something that can actually be read. The charts are that large and convoluted. I swear he could drop in a little box that says “Mornington Crescent” and it would probably be ages before anybody spotted it. ;)
If it’s not necessary to have an exact understanding, it’s often a lot quicker/easier to determine what the program is doing and then reproduce this behaviour with new code. |
Paolo Fabio Zaino (28) 1882 posts |
That depends on the field, TBH. I mean, you’re right that a lot of people just want a bunch of libraries/frameworks built in one of the programming languages they know, and that’s it. But there are still fields like mine (cybersecurity) where an understanding of assembly and machine language (for both x86 and ARM at minimum) can help tremendously when analysing malware, which in many cases is still written in ASM, C or C++, and now in Golang and Rust, to name a few. |