Linux Port
Pages: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 ... 20
Timothy Baldwin (184) 242 posts |
I’ve uploaded the source for a fix for random crashes in ptrace mode: I was setting up the alternative signal stack in the wrong thread! And a binary. |
Timothy Baldwin (184) 242 posts |
That’s normal for RISC OS, the default size is zero. Tristan, the bug you encountered on the RPi3 is a RISC OS classic: ShrinkWrap allocates a dynamic area of “unlimited size”, which by default means the apparent size of RAM; a fix was introduced in RISC OS 3.8, but part of it is in !Boot. I have removed ShrinkWrap. You will also need a fixed RISC OS binary. A core dump would be helpful for the crash you are getting on aarch64; enable it with “ulimit -c unlimited”, compress it with “tar --sparse --xz -cf core.tar.xz core”, then email it to me. It isn’t a plain old invalid memory access, as those become RISC OS errors; use the --noaborts option to get a core dump from those. |
Tristan M. (2946) 1039 posts |
I grabbed the latest CVS source a little while ago, plopped the RISC_OS binary in, set the ulimit and went for verbose just because. And it’s stuck.
It’s been sitting there for 15 minutes. I’m going to let it go for a while because I don’t know if it is actually doing anything besides thrashing. Kudos on the multithreading by the way. It’s got system load pegged at 100% on all four cores. |
Timothy Baldwin (184) 242 posts |
Tristan, it’s not that slow. And what computer is this again? It should not use all four cores at this stage, and then only if asked. So is it behaving randomly? |
Kuemmel (439) 384 posts |
The speed-up those Cortex A-57/A-72 boards are offering for RISC OS here is quite enormous; Linux doesn’t seem to slow down the whole thing… It’s only just now that I understand that VFP/NEON is not supported and crashes if used by any software under RISC OS. Is that an issue that can be solved easily soon? |
Timothy Baldwin (184) 242 posts |
VFP/NEON instructions should just work; however, to be compatible there needs to be an implementation of VFPSupport to save/restore registers as requested and convert SIGFPE to RISC OS errors. One detail to consider is whether these exceptions should be passed through the “hardware” vectors. For me the real time clock and the filing system are more important. OS_PlatformFeatures also needs some attention. As for performance, SWI instructions are slow, a 900% overhead for OS_LeaveOS, and were accounting for half the time of a RISC OS build; they might be faster if the BPF JIT compiler is enabled. Additionally Linux has no implementation of a full OS_SynchroniseCodeAreas. |
Kuemmel (439) 384 posts |
…as I don’t have the Linux port running by myself, it’s Chris Gransden who told me that my NEON/VFP Basic examples I coded don’t run. @Chris: May be it helps when you show the error message here ? |
Jon Abbott (1421) 2641 posts |
Is that 900% overhead due to CPU paravirtualization? SWIs such as OS_LeaveOS and OS_EnterOS need replacing with hypercalls, which handle the CPU paravirtualization and avoid the SWI vector and its associated overhead. I presume that’s the cause of the performance hit? OS_SynchroniseCodeAreas is an SWI you’d completely replace, particularly if there’s a JIT involved, as you’d need to clean relevant JIT’d code prior to cleaning the caches. As it’s considered a slow SWI, this could be done at SWI dispatch instead of a hypercall. |
Timothy Baldwin (184) 242 posts |
@Jon: The JIT in question converts Berkeley Packet Filter (BPF) bytecode to (ARM) machine code and is an alternative to an interpreter written in C. A sequence of 6 Berkeley Packet Filter instructions is used to separate Linux system calls from RISC OS SWI calls based upon the address of the SWI instruction. The sequence is like:
|
Timothy Baldwin (184) 242 posts |
@Kuemmel, Jan posted earlier in this thread:
They do however work if I edit out the calls to VFPSupport_CreateContext and VFPSupport_DestroyContext. |
Kuemmel (439) 384 posts |
Great news, this makes me consider getting one of those A57/A72 Chromebooks! I forgot about Jan’s post. Can you send the results of all 4 Frac applications? What’s your CPU and MHz? |
Timothy Baldwin (184) 242 posts |
-----------------------------------
32 Bit Fixed Point Integer Fractal using 64-Bit SMULL by M.Kuebel
-----------------------------------
Time taken in [s]............: 1.6
Total Iterations.............: 179217040
Million Iterations per second: 112.01
---------------------
NEON Single Precision Fractal by M.Kuebel
---------------------
Time [s].....................: 0.51
Iterations...................: 177944574
Million Iterations per second: 348.91
---------------------
VFP Double Precision Fractal by M.Kuebel
---------------------
Time [s].....................: 1.85
Iterations...................: 177936814
Million Iterations per second: 96.182
---------------------
VFP Single Precision Fractal by M.Kuebel
---------------------
Time [s].....................: 1.85
Iterations...................: 177944574
Million Iterations per second: 96.186

/proc/cpuinfo: processor : 0

I believe it’s a Rockchip RK3288 containing 4x Cortex A17 at 1.7GHz. |
Kuemmel (439) 384 posts |
Thanks. Nice, similar results to a Cortex A15, so it seems there’s almost no overhead from the Linux layer :-) We’ll hopefully see how the A72 from Chris overshoots those results; it’s basically two steps of evolution beyond the A15. Update, here we go, thanks Chris! => Done on a Cortex A72 at 2.11 GHz. Quite a step. Compared to the A15, the gain normalized clock for clock is about +20% for NEON and +45% for single and double precision. The integer-based FixFrac stays the same, as expected, as they didn’t improve the integer core, or at least it doesn’t show up in that specific benchmark. I also let him test my sorting benchmarks; those show a lot of improvement too, I guess mostly due to the much faster memory access or cache system.
|
Tristan M. (2946) 1039 posts |
Hi, sorry. It’s an Orange Pi PC2: AllWinner H5 (Cortex A53) running a nightly mainline Armbian kernel. Pretty much the same guts as a Pine64, just different peripherals glued on, I think. I tried again briefly last night: updated the source, updated Armbian. It still segfaults if I leave it as a standard make check. This is what happens first time round on aarch64.
The “memory mapping failed” error is interesting. If I do it, then it segfaults the next time around. Even setting the value to 0 it still segfaults in the same place. It’s the same place as in the pastes on pastebin I posted. |
Timothy Baldwin (184) 242 posts |
Tristan, I think I have found the bug, in Linux. sigaltstack is specified as allowing stacks of MINSIGSTKSZ bytes. MINSIGSTKSZ is 2048 for 32-bit ARM and 5120 for 64-bit ARM, but 64-bit Linux incorrectly checks the provided stack against the 64-bit value of MINSIGSTKSZ for both the 32-bit and 64-bit versions of sigaltstack. This causes a double segmentation fault when Linux tries to save state using the current value of R13 while an SWI is called with R13 not pointing at a usable stack. As a workaround I have increased the size of the stack, and I have uploaded new source and a binary. If you update the source it should build OK. I will now be distributing the binary in a separate git repository which is linked to by the source as a submodule. It will download automatically the first time; to update to my selected version run “git submodule update”. |
Tristan M. (2946) 1039 posts |
Haha, of course it is. I’m amazed you managed to find that somehow! Today I had a bit of a compilation adventure. I put Armbian mainline on an SD card for the Orange Pi PC (H3 SoC) and set it up essentially like my Armbian H5 setup, including swapping over the USB HDD. You’ll be pleased to know RISC OS compiles fine. When I get to it, I’ll plug the HDD back into the Orange Pi PC2 and give it a try. Long story short, I did some rearranging and need another 2A+ USB power adaptor. I also put some effort into fixing the file server so it’s not always causing me grief and falling over under heavy load. I’m looking forward to watching this project continue. It’s amazing! |
Tristan M. (2946) 1039 posts |
aarch64 still gets stuck during the build. The RISC_OS process is at 100% and has been for 12 minutes so far. e: I just let it run to see what happened. It seems some time tonight it died. It’s not even on the network. Whoops. |
Timothy Baldwin (184) 242 posts |
Tristan, can you reproduce this? It might be related to this:
The Raspberry Pi 3 is the only ARMv8 computer RISC OS runs on. Can anyone else build commit 81387339f85be9babf45890608044d5547c93897 on ARMv8? |
Timothy Baldwin (184) 242 posts |
Tristan, try the ARMv8 branch; I have recompiled the programs used in the build system with the latest RISC OS GCC. |
Tristan M. (2946) 1039 posts |
By reproduce, do you mean it getting stuck forever at the GNU.diff bit? Every single time, completely reproducible. I’m not sure I follow what the GCCSDK gcc release has to do with it. Anyway, a couple of days ago I rebuilt GCCSDK from CVS. I built gcc from that for my RPi Zero. I know the RISC OS bootstrap binary works and the build completes on the Orange Pi PC (H3), because I did that. I also know that the bootstrap binary works on the OPi PC2; you can see it in action in the latest paste I linked. Plus I was aimlessly messing around in it a few minutes ago after running it directly. However the build jams up for some reason, flogging the CPU at 100%. I just remembered something: when I was working on rogpio I was testing on the RPi Zero and RPi3. I was using gcc for that, and it was before Christmas. |
Timothy Baldwin (184) 242 posts |
I refer you to commit r7043 from 2016-03-11 07:53:01. Before that there were many ways to execute a SWP instruction in UnixLib, and since at least one is in the error handling code, this will result in an infinite loop if run on a system that does not support SWP instructions. To confuse the matter, Linux has a compile-time optional emulation of SWP instructions. |
Tristan M. (2946) 1039 posts |
Haven’t had much luck. I tried the rpcemu route on x86-64 and aarch64. x86-64 downloads rpcemu, builds it etc., then loads rpcemu, which flashes up the splash screen, then says: aarch64 is a different matter. I’ve invented the most inefficient means to build RISC OS ever! This is a rare occasion when I have the PC plugged in. I’m accessing my aarch64 build server via ssh. This session happened to have X tunnelling because of something I was doing earlier. So without thinking I have the PC just sitting here, with rpcemu running on the Orange Pi PC2, displaying the build process via the remotely running, locally displayed rpcemu. It’s running at an average of 11.3 MIPS. Oh dear. However it is building, albeit very slowly. I hope it completes. So… why is aarch64 building with rpcemu whereas x86-64 isn’t? Both are using the same version of the repository. Curiouser and curiouser. |
Timothy Baldwin (184) 242 posts |
I can reproduce this; it would appear you previously attempted to build it with the Linux port and did not clean the source tree. IXFS uses POSIX extended attributes to represent filetypes, whilst RPCEmu uses the traditional extension method; as such the srcbuild binary written by IXFS has the filetype of “Text” in RPCEmu. The traditional extension method however makes a mess of filenames when viewed outside RISC OS. |
Tristan M. (2946) 1039 posts |
I did clean the source tree. After seeing your post I even deleted the tree and re-cloned it. I tried a normal build instead of rpcemu and saw the exact same error in the terminal before the build failed. |
Timothy Baldwin (184) 242 posts |
Currently Linux environment variables not beginning with However on second thoughts dropping the Any opinions? |