Cross-Platform ARM App?
Andy Burgess (1662) 14 posts |
Hi there, I’ve just developed my first Android app (using Eclipse on the Mac) and found it a bit unwieldy in places. I know Android uses ARM processors, so with my interest in RISC OS development it posed a question for me: is it possible to develop an app in BBC BASIC, RISC OS C/C++, or assembly that would work on any ARM device – e.g. Android phone, Raspberry Pi and – dare I say it – Windows RT etc. – regardless of OS? I suspect that Android won’t have the same high-level calls to the OS as in RISC OS (so BBC BASIC and C are probably out), but surely the low-level – i.e. assembly – commands would be the same regardless of platform – or would they? I.e. could an ARM-assembled “app” be produced on a Raspberry Pi under RISC OS, and be copied (or recompiled) to run on an Android phone? Really interested in knowing if this is possible. Cheers |
Rick Murray (539) 13840 posts |
Yes and no. If you write in pure assembler (does Android cope well with that instead of Dalvik code?), you can do it under RISC OS with the caveat that you will need to use libraries specific to Android, and be aware that SWI xxxxx does one thing under RISC OS and likely something else under Android (IIRC, under Linux the syscall number is passed in a register, not encoded in the instruction). As for C, it is more complex. Not only do you need to link with Android libraries, you need to ensure the calling conventions match up (which register is stack pointer, which is frame, which can be trashed, are all arguments passed on the stack or is it a0-a3 then stack, etc. etc.). |
nemo (145) 2546 posts |
Simple answer: Yes, as long as you don’t need any input or output of any kind! ;-) Sadly as soon as you do (and let’s face it, you do) then you would require an abstraction that works across all your target platforms. This is a problem that is as old as computing, and has led to solutions such as POSIX, which is Unix/Linux, basically. Having written an Android app in addition to RISC OS code you’ll be aware of the very different high-level idioms. Different OSes just have very different ways of doing things, and abstracting them all usually ends up with a very limited subset… and not a very attractive one. Hence all those hand-me-down command line utilities RISC OS inherits from Unix. In any case, targeting a single processor flavour (as RISC OS does) is so last century, as Linux, Android and even Windows 8 demonstrate. And the power of modern hardware means there’s now almost no excuse for writing code at such a low level. My advice these days is to learn Javascript. No, really. Learn Javascript. Properly learn it. Don’t just look at some and think it looks like C/C++/C#/Java and you understand it already. You don’t. Don’t think it’s only any use for animating buttons on websites. It isn’t. My considered professional opinion is that all programmers should learn Javascript. If only to avoid looking like a fool. Anyway… there are environments such as Qt (“cute”) which target multiple platforms so you can write code once and deploy it on Macs, PCs, iOS and Android. Sadly there’s no RISC OS version! The other thing that has crossed my mind is that Android’s kernel is Linux… but it doesn’t have to be. Android-on-RISC-OS is possible (though probably not achievable) and would be pretty attractive. Imagine being able to run Android apps alongside Wimp programs. Windows 8 done properly! |
Andy Burgess (1662) 14 posts |
Thanks for this clarification. I suspected it wouldn’t be easy! Nemo: You say “learn Javascript”. I already have! I do a lot of websites (PHP / ASP (yuk)) and have had to learn Javascript to do some of the client-side things, and have become proficient in it. I’ve heard of dynamic-Javascript for reading databases etc, and have tinkered with QT and JQueryMobile (as well as MonoDevelop and Java and others). These environments are all well and good for Mac/PC/Linux environments, but I’d love to be able to cater for RISC OS as well; but of course web-based applications should still work on RISC OS, shouldn’t they? NetSurf is the default browser nowadays? My Android app makes use of a barcode reader (ZXing) which was fairly easy to incorporate into my app, but I believe that would be non-trivial to do with a one-size-fits-all package like QT or JQueryMobile. What are people’s feelings about Java apps? Could I create a standalone app (i.e. not necessarily with web access) that would work on RISC OS too? I’ve heard of issues with “security” flaws with Java – so that’s made me reluctant to use it. |
nemo (145) 2546 posts |
I agree with the “should”. I’m not confident about “do”.
|
nemo (145) 2546 posts |
Indeed. Android-on-RISC-OS would be awfully attractive for exactly that reason! |
Rick Murray (539) 13840 posts |
Yeah, people go on and on about how “portable” C is, but the fact is the portable part is extremely restricted (you can open files, print stuff, get simple input) but you can’t list files in a directory, and anything smarter than basic unformatted line-by-line output is off limits (no VT codes, no ANSI colours, definitely no GUI). As soon as you need any of that, the program is no longer portable. Then there’s another lurking “gotcha”. I wrote some image-fiddling code. It used “int this;” and “int that;”. Worked a treat under RISC OS. Failed horribly under DOS, for a DOS int is half the size of a RISC OS int. Needed to make ‘em all “long” for it to work [and then there’s that memory model bull that used to plague the DOS compiler days; thankfully _far and _near were rejected from inclusion into the C standard…]. IMHO, “portability” is a myth unless you are coding for something that isn’t real that can be implemented on various different platforms. This being a basic description of Java. And even then…
And is very heavily dependent upon aspects of how Unix works (the ‘/’ root directory; threading and spawning child processes; etc).
Probably simpler than trying to port an x-windows application…
Different methodology. Android doesn’t count, it is pretty much (heavily) bastardised Linux. Linux itself seems to be written with the design principle of the least amount of assembler necessary, and that’s pretty much glue to do the low-level fiddling that you can’t manage in C… though as Canonical showed when they dropped ARMv6 in favour of ARMv7 optimisations, it doesn’t necessarily hold true that something written high level will support all sorts of processors. For sure it is magnitudes simpler to port (RISC OS on x86? ain’t never happ’nin) but after all of this, somebody somewhere needs to support your processor.
RISC OS normal not-specially-optimised-boot on a Pi is sixteen seconds. You don’t need to take my word for it, I videoed it: http://www.youtube.com/watch?v=idN0Cph1hh8
No thank you. There are too many versions with too many stupid little quirks. Wanna see? Take the UserAgent string for a WebKit browser and paste it into the UserAgent for Firefox and see how many sites break – Facebook, dead (yay!), Flickr, dead, GoogleMaps, fail.
Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6
Why is this happening? The server is reading the UA to determine which flavour of JavaScript actually works on your browser, and sending it appropriately. As you are lying about your browser, you get the wrong scripts, and it all stops working.
That’s about my level of understanding. Sufficient to drop some stuff into my site, and that’s that.
Oh, it can do a lot more (GoogleMaps is some amazing code), but in a fully platform independent manner it’s pretty much only useful for animating buttons on websites… This doesn’t mean I “don’t know” JavaScript, it means that I never bothered with the nitty-gritty cool Ajaxy-Whoo-Hoo details as I don’t fancy the idea of arbitrarily restricting who can make use of my site based upon their choice of browser (and since I’m kind of lazy at times, I baulk at doing the same thing multiple times because people can’t agree on stuff).
Is this like “you are supposed to learn Pascal”? I get it, Pascal drags you kicking and screaming from BASIC to the idea of typed variables, reusable code libraries, public/private functions, etc etc. However having learned Pascal and swiped a copy of TurboPascal from the college computer (latterly, it was a freebie on Borland’s site for a nostalgia kick), I don’t think I really ever did anything useful with it. It was just a “oh yeah, I remember something like this” while teaching myself C. It should have been: Stuff Pascal, learn C.
Never troubled me in the past… ;-)
I wonder if Qt could be ported to RISC OS in some manner? I don’t know about how well Windows supports the Qt runtime (as in, is it 100% complete?), however I feel the need to point out… latter-day MacOS, PCs (with Unixen), iOS, Android – they’re all basically variations of the same core system. The only difference would be PCs with Windows, but you didn’t specify.
No, Android’s kernel probably does have to be Linux. If you were to change the kernel, you’d be changing the heart of the OS and the result wouldn’t be Android… The apps are (typically) a higher level using the Dalvik (Java) runtime. |
Rick Murray (539) 13840 posts |
I got sick of reading ElReg’s reports of The Next Big Java Problem (here’s an example) alongside Firefox saying that one of my add-ons may pose a risk so it has been disabled… This is talking about a Windows PC. |
nemo (145) 2546 posts |
Post hoc ergo propter hoc. Linux isn’t slower to boot because it is written in C.
No. You’ve just been lamenting the differences between compilers harming C’s platform independence. Now you cite historical differences in browser implementations as being JavaScript’s fault. They aren’t. In any case, the days of unilateral proliferation are over – all browser developers are working towards the same standard, and that will be called “Harmony” as it happens. I’m quite serious about advising programmers to learn JavaScript, especially people such as yourself who are prejudiced against the language for spurious reasons. You should learn the language; you will learn some really cool stuff (about JS, and about programming). Douglas Crockford’s “JavaScript: The Good Parts” should be on your shelf already. If not, put it on your Christmas list. “Don’t just look at some and think it looks like C/C++/C#/Java and you understand it already.” <facepalm> What did I just say? This is precisely the problem. I would put money on you not actually understanding the language you’ve used. How can you judge it if you don’t understand it?
It’s open source, I don’t see why not.
Yes.
Oh bollocks. It absolutely doesn’t. How do you think the Windows Android SDK works?! Also, BlueStacks: http://bluestacks.com/app-player.html |
nemo (145) 2546 posts |
Actually Rik, bollocks to your “Linux boots slowly” comment too. Linux booting in a virtual PC implemented in Javascript running in a virtual machine in your browser: http://bellard.org/jslinux – boots in 6.7s on my ancient XP machine. As I say, writing code at the processor level seems somewhat outdated for the majority of usage cases. |
Steve Pampling (1551) 8170 posts |
It would depend on the distro and the tat stuffed in the boot I suppose. |
Rick Murray (539) 13840 posts |
That’s not “Android”, that’s an Android “app”, which – akin to Java to which it is related – will function on any suitable runtime. As I said, RIM’s PlayBook (as in “Blackberry”) is planning a runtime for Android apps.
Are you talking here about Android itself or the “apps”? You said in the previous message:
So forgive me if I (incorrectly?) assumed you were talking about the operating system itself – which when connected via
I grew up in a time when people used to write stuff in C and use hand-optimised assembler for bits that needed speed. This isn’t to say that compiled C is inherently bad, however there are some tricks even the compilers don’t know about that can be employed to get the maximum results from a slow processor. Perhaps the boot lethargy is just bad practice? It is possible to strip a ~1 minute boot time down to ~15 seconds by employing a variety of tricks. Maybe some of them are things Linux should have done in the first place? I can give you a classic example by pointing to my PVR which runs a stripped version of Debian & Qt4 on a 200MHz ARM926 (TMS320DM320). The people that built the thing made two versions of the firmware. The first was Debian and NanoX. It was small, compact, kinda plain, but did its job. Then came the second version that sings, dances, and is Debian/Qt4. The CF slot on the front now needs to have a CF with an ext filesystem always mounted because the Qt libraries blew a hole in the 16MiB flash. The OS boots from Flash, the Qt stuff is on CF. I find it horrific that a nice display and runtime support for what is, primarily, a video recorder consumes over 20MiB.
Historical? I gave you a test to do, if you felt so inclined, to demonstrate that JavaScript is not cohesive even today. I can forgive the past – all kinds of crazy nonsense went down in The Browser Wars. But we’re still not there yet. As for talking about C’s platform independence, I was actually pointing out that it is a myth. Sure, you can cross-compile all sorts of simple trivial little things; but the fun and joy of porting is that as soon as you get to anything useful, this supposed “platform independence” evaporates like cheap wine spilt on a hot stove… While I’m kicking C, here’s the library description of rewind():
* sets the file position indicator for the stream pointed to by stream to
* the beginning of the file. It is equivalent to
*   (void)fseek(stream, 0L, SEEK_SET)
* except that the error indicator for the stream is also cleared.
* Returns: no value.
So not only is it the same as the fseek() that I use, but it returns no value and clears the error marker. Consequently, there is no way to determine that it failed.
I’ll believe that when I see it. As for Harmony – do you mean this?
That’s pretty much a given, don’tcha think? I cut’n’pasted enough to do what I needed to do. Call it Cargo Cult Programming if you will. Though, I must say I’m quite impressed by the x86 emulator.
SDK? As in Software Development Kit? Funny, there was me thinking it was a set of programs for the host OS and an emulator for testing the compiled software in the absence of an actual phone/tablet to push the code to. SDK != OS
If you wish to lower the tone of the conversation with words like “bollocks”, please at least be rude to the right person. I’m not Rik, I’m Rick.
That emulator is actually pretty cool. However, I will point out that your 6.7s statistic is meaningless as you don’t define what “ancient XP machine” is. Try this. eeePC901 (1.6GHz Atom), on Firefox 3.6.27 with numerous active tabs but no other applications running, connected to the outside world via Wifi .n (13Mbps) and an ADSL link that can run up to around 1.8Mbps.
Booted in 107.934 s
Welcome to JS/Linux
It takes me longer than your 6.7s to get to “Starting Linux”; but then it does need to also pull in a 1.73MiB kernel and a 15KiB bootstrap. No idea about how much is loaded of the drive image… I suspect the lagginess here is the JS engine in this version of Firefox, rather than the system it is running on. I understand and agree that there are numerous benefits of writing stuff in high level languages. I also understand (and would agree) that there are times when using assembly is beneficial. I think the time for implementing new operating systems from the ground up in assembler has long gone, and in this respect RISC OS remains a pleasing oddity, a nod to a culture of people who knew how to program efficiently. To this end, it is perhaps no mistake that RISC OS seems more closely aligned with embedded systems than big fancy behemoths. In fact, I would be inclined to suggest that RISC OS could – if it attracts a good development atmosphere – be a rather unique bridge between the small embedded devices (frequently programmed in assembler unless it has some sort of runtime embedded for a C-like or BASIC-like method of programming) and big behemoths probably running a Linux variant. With RISC OS you can have services, features, filesystems, and pretty windows. But you can also have speed, hijack interrupt handling, replace or customise bits or swathes of the OS, as necessary. You mustn’t forget, we’re looking at an OS with a heritage back to the mid ’80s. Back then, huge chunks of assembler were not unheard of (SIBO, AmigaOS, MS-DOS…). 
Original Unix itself was first written in assembler, then rewritten in C. That an operating system from a long-gone ’80s Micro manufacturer written in absurd piles of assembler is running, today, on the RaspberryPi and similar, well, just goes to show you that maybe this assertion isn’t quite as crazy as it might seem. The impossible has happened. It’s sitting on the table in front of me. I couldn’t have dreamed of a gigahertz ARM sporting a half-gigabyte pile of memory. Even in the RiscPC days, such things were unimaginable. Now I have it, I’m not entirely sure what to do with it. I mean, I could go “wheeeee!” and load up five thousand, one hundred, and eighty copies of !Edit – but that seems like a bit of a waste, don’t you think? RISC OS software has always been small, tight, and not megabytes of bloat, so it stands to reason that around 20MiB of the memory gets exercised regularly, while the remaining pile twiddles its thumbs, but not in a cute Rikka-Takanashi-in-the-opening-credits kind of way… Looks like each way has its benefits, and its problems. |
Malcolm Hussain-Gambles (1596) 811 posts |
Just two questions… 1. Is there javascript support on RISC OS? If not, how would either of these be portable to RISC OS? Python is OK, and fairly portable. Just look at firefox for a reason not to use javascript. FYI Windows NT supported multiple platforms, IMHO supporting multiple platforms on a desktop OS is an insanely bad idea. |
nemo (145) 2546 posts |
My point is that apps are compiled to a virtual machine, not a particular processor architecture. Indeed, but considering the performance of such machines, the distinction between the two diminishes – despite Moore’s Law our hearing doesn’t get any finer, so when it comes to sound generation what used to be written in tight MC can now be generated in a high-level scripting language. there are some tricks even the compilers don’t know about that can be employed to get the maximum results from a slow processor. I fairly recently hand-optimised an ARM blitting routine to improve throughput by 33% over the compiler-generated code (and this is high resolution stuff). Difference in performance for the whole system? Not measurable. As soon as you involve IO or process swaps (which are orders of magnitude slower than your “glue” code) such ‘tricks’ are meaningless, unless the system is utterly trivial.
Well, it could be argued that part of Linux’s problem is that it’s written by 20,000 people with different ideas of how things should work, so it’s a camel. Comparing RO boot time with Linux boot time is comparing apples with oranges (or crab apples and Ugli fruit, more accurately).
Quite so. Putting a server class OS in an appliance is an absurd but sadly economically-viable result of Moore’s Law… as is the observation that the hardware chosen to run the OS will be not quite powerful enough.
I wonder how many of your tests concern the language, and how many the various browsers’ implementations of the DOM. I imagine more of the latter than the former.
http://wiki.ecmascript.org/doku.php?id=harmony:specification_drafts
Sorry Rick, that was a typo, and my colloquial phrases are jocular not desultory.
OK, same machine booting RISC OS 4 under VirtualRPC: 35s (though that’s to the desktop)
It’s at this point we have to quibble about what “efficient” means. In terms of shaving 33% off a loop, fine, aren’t we clever, but that also leads to monstrosities such as using a ‘non-standard shift’ in a comparison to preserve the carry, which results in unmaintainable code. Plus, spending three times as long to write a loop that goes 33% faster once a fortnight is nobody’s definition of ‘efficient’. That RISC OS still has physical hardware to run on is a lucky coincidence. Writing a hand-tuned ARM version of a routine that also exists in a high-level language is fine. Writing stuff ONLY for one processor is madness (in terms of longevity).
In the real world, if Linux is too big, you probably want QNX or ThreadX. |
nemo (145) 2546 posts |
No. This is a bad thing.
No. This is a good thing.
A minimal JavaScript interpreter is about 100KB. A JIT compiling version can be much bigger (Google’s V8 is over 2MB).
Indeed, Python is great. The problem I see with python, ruby etc is that they aren’t just languages (and in fact aren’t very useful as languages) – they are language/library hybrids, and the libraries are MASSIVE. JavaScript doesn’t have a library, it has the internet.
Just look at Chrome for a reason to use javascript. I don’t see what citing one browser has to do with discussing the merits of a programming language.
MS have a long relationship with Intel. ARM limitations of Windows 8 are political (and marketing) rather than implicit.
That is Apple’s choice, and yet iOS is only supported on ARM. Again, political, not technical.
You seem to have forgotten about linux, conveniently. |
Steve Pampling (1551) 8170 posts |
As you say, Apple’s choice and, more to the point, rumour has it that the situation may well change soon. Something to do with increasing processing power in the ARM range and reducing code development variants. Not that ARM fans should celebrate – Apple move wherever their interests lie with no tinge of processor loyalty, which in commercial terms is understandable.
Rick Murray (539) 13840 posts |
I’m not convinced that a scripting language would be able to do it well [and by this, I mean the scripting language generating the sounds, not just calling a bunch of prebuilt machine code library functions that does all the hard work…]; but certainly a Java app can do the sort of stuff that before required careful assembler. An example I’d point to being MilkyTracker for Android. Looks like crap on my phone (prob. better on a tablet!), but it’s a pretty good attempt at replicating the FastTracker MOD player.
That’s why I said that I believe that Linux being written in C will have an effect on system performance; and followed it up with: but on today’s hardware, you won’t be able to tell.
Depends – if your blitter was for a game, all the data should have been loaded and I/O stuff like sound running under interrupt by the time your blit would get an exercise. Plus, no task swaps. In a normal use case, swapping processes can be sped up with things such as lazy swapping (although the impact of that can be felt elsewhere, as any Windows user can tell you) and I/O would work a lot better if it could run async on interrupt. For instance, should the machine freeze while spinning up and reading the root directory of an unknown floppy? No, it shouldn’t. You’d burn several hundred million instruction cycles saying “yo, flop, you done yet?!”. So if this was async and non-blocking, you could let the filesystem get back to you when it has an answer.1
1 One could point out that if you can hand-optimise something by a third over the compiled code, imagine that scaled up to the entire Linux kernel!
Mmm… Could explain the forks. To each their own religion.
Moore’s Law, or more likely “it has been proven to work and it is free to use”. As absurd as stuffing a server class OS into an embedded device is, surely writing your own OS from scratch when something usable already exists is even more absurd?
Well, the claims in the datasheet that the device can manage recording and playback of D1 are a little hokey at best. It is my personal suspicion that the DM320 is capable of handling D1 (that’s full frame PAL or NTSC) video, but the moment you add an OS to tie it all together, the D1 abilities vanish ‘cos there just isn’t enough headroom.
Hehe… I didn’t mean when I see the spec, I meant when I see working implementations that certain groups don’t try to screw up for their own protectionist aims.
Well, VRPC is mind-numbingly quick anyway. But hey, you just made my point. RISC OS is incredibly quick to get going, even when it’s running on a PC! <stir><stir><stir>
I would say making the best use of the available resources for the most streamlined result. Oh my God, that sounds like something you’d see on a Powerpoint. Gah!
The irony is, when such optimisation isn’t being performed by necessity (like a full screen real-time flight sim on an 8MHz computer), it is often done “just because”. Most of my optimisations have little purpose other than “because I can” and “because I feel the overheads of meh are too much” (using a highly arbitrary yardstick). I tend to like to write my own jump tables because I do it better than any compiler I’ve looked at the code of. The DDE compiler (Castle generation) wasn’t bad, but wasn’t there yet. It branched to a set of branches to the routines (so close to perfection…!). You don’t wanna know what TurboC++ (16 bit world) and OpenWatcom (32 bit world) did. I suspect x86 code is just grotty however which way you look at it.
Hadn’t we all learned our lesson when all the cool 6502 NMOS side-effects failed when the CMOS version of the chip was released and turned ’em all into NOPs? Looking at the “compiler uses unaligned load to load 16 bit value in two instructions”2, I suspect we’re just doomed to repeat our mistakes.
2 The galling thing is that an unaligned load needs a load followed by a mask or shift to rotate the data into the correct place.
Your blit routine is only called once a fortnight?! Jeez, you probably could have just SpriteOp’d it!
I suspect there’s more of that than might be immediately obvious – why is (true) Windows only available on x86 hardware? If it was all high level code and some glue, it wouldn’t be too hard to recompile it for ARM or MIPS or whatever: world domination by having Windows 7 run on loads of stuff. Word, Excel, etc. could come with binaries for various different types of machine (.EXE for x86, .arm for ARM, .mip for MIPS, etc). Granted, a lot of this is political too; however with Windows 8 on ARM not being like Windows 8 on a PC, either the two versions of Windows are inherently different for technical reasons (like lots of x86 specific stuff) or Microsoft are shooting themselves in the foot. Score one for the HLL OS. On the other hand, as I said in my previous post, RISC OS was written in the day when it wasn’t unusual to write OSs in assembler. By doing so, plus the somewhat unique benefit of the OS being written by the guys that designed the processor, the system spent some time kicking the ass of all of the other so-called home micros. Well, Acorn was on top for a short while. Some never made it that far (hello Tangerine Computer Systems, I’m looking at you).
You don’t think, with some revision and tidying up, that RISC OS would have a place in the embedded world too? |
Rick Murray (539) 13840 posts |
It’s been a while since I worked with C. Too long spent enjoying the quirks of PHP and VisualBasic. So I’m writing a configuration file loader having read Justin Fletcher’s guide twice (read from Choices:blah and write to <Choices$Write>.blah – why do some people have such difficulty here?) and it keeps failing when running the verification test to make sure the file being loaded is correct. Then I look up the description of
Reads at most one less than the number of bytes specified? That does it. The original specification for the standard C library was written by a drunken monkey and ain’t nobody’s gonna convince me otherwise. Gah. Stupid! Stupid! Stupid!
|
David Gee (1833) 268 posts |
Presumably because you give n as the size of the char array, and reading a maximum of 1 less than this allows for the terminating null character? |