Time for a Qemu port
David Feugey (2125) 2709 posts |
With the announcement of the Pi 5, and all the other 64-bit ARM boards/servers/whatever, I wish an official Qemu port would be offered on the ROOL site. Qemu, along with KVM, provides a way to run 32-bit OSes on 64-bit ARM systems (up to the Cortex-A78/Cortex-X1) at native speed (with some specific VirtIO drivers). It has extensive features, including a Spice server (for remote access), fast graphics emulation (via VirtIO GPU) and a HostFS-like system (via virtiofs). Qemu can also run without KVM on almost every modern system on the planet, just without acceleration.

RISC OS would benefit from much faster/better disc support (since the host will manage NVMe compatibility), possibly much faster printing (by redirecting lpt1 to a file and letting Linux/Windows print the PostScript data to a real printer, for example), and much faster everything else, since Qemu devices run on separate threads (better for multicore).

Note: for hardcore purists, a combination of Linux + KVM + SDL/framebuffer + Qemu would give you – basically – a bare-metal hypervisor suitable for RISC OS distribution on Cortex-A76+ processors. Alternatively, you could use ESXi for ARM, an (almost) similar offering, but more server oriented, so with remote access only for the VMs.

IMHO, even if a 64-bit RISC OS is ready one day, it will need a 32-bit layer for old applications. The best path is to use virtualisation/emulation, and the best tool for this is Qemu (especially when you know it provides a user-space emulation mode, à la Aemulor). So – whatever option the community chooses – a Qemu port will be necessary and – IMHO – should be a top priority. Now, where is the bounty for that? Or, if it’s not on the ROOL roadmap, where do I send my money to support and accelerate this port?

Note: a Qemu port is an opportunity. The opportunity to make RISC OS run at full speed on more devices (including in the cloud). The opportunity to provide commercial drivers or tools for better speed or better integration with the host system. The opportunity to distribute RISC OS software on every major OS. The opportunity to make some commercial tools to package a RISC OS VM as Windows/macOS/Linux installers. And, of course, the opportunity to reach a broader audience for RISC OS, and its native ports.
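To make that concrete, here is a rough sketch of the sort of invocation being described. None of this exists today for RISC OS: the riscos.rom image is hypothetical, the exact flags vary by QEMU version, and KVM only helps on 64-bit cores that still offer AArch32 at EL1 (roughly up to the Cortex-A78/Cortex-X1, as noted above).

    # Pure emulation (TCG) on any host OS/architecture, using the generic 'virt' board
    qemu-system-arm -M virt -cpu cortex-a15 -m 1G \
        -kernel riscos.rom \
        -device virtio-gpu-pci \
        -drive file=harddisc.img,if=virtio \
        -netdev user,id=net0 -device virtio-net-pci,netdev=net0

    # KVM acceleration on a 64-bit ARM host whose cores still support 32-bit EL1
    qemu-system-aarch64 -M virt -accel kvm -cpu host,aarch64=off -m 1G \
        -kernel riscos.rom ...

All of the VirtIO devices above would need RISC OS drivers to be written before any of this is useful. |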
Kevin (224) 322 posts |
How much work is involved? |
Theo Markettos (89) 919 posts |
I think this is a good plan. There is a QEMU Raspberry Pi emulation, and I got some way booting Pi ROMs on it. It would be possible to iron out the bugs in that, but I think a better long term plan would be to have RISC OS support one of the generic-ish Arm platforms QEMU supports, and then do all the I/O via VirtIO (drivers would need to be written). VirtIO is more efficient and generally free of the quirks of specific hardware like the Pi, and this makes drivers simpler and easier to write.

I’d expect getting a basic HAL (timer, serial terminal, etc) up on a QEMU platform to be not much work, especially since you can instrument QEMU to see what’s going on (I used QEMU’s internal gdb to trace what RISC OS was doing, and you can also add tracing to the QEMU source code).

Once a baseline RISC OS is up, then it’s a question of adding drivers. While VirtIO is an end goal for efficiency reasons, it might also be possible to re-use existing RISC OS drivers (eg QEMU can emulate an IDE hard drive so ADFS might talk to that with minimal changes).

One thing that is useful for virtualisation is device tree support: RISC OS currently hardcodes where all the key hardware is for a given platform, but for a hypervisor this can change through configuration. Device tree is set up by the hypervisor to tell the OS where the hardware can be found at runtime.
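As a concrete taste of what a device-tree-aware HAL would have to consume, QEMU can dump the DTB it passes to the guest, and the standard device tree compiler will turn it back into readable source (a sketch; dtc ships with most Linux distributions, and the riscos.rom placeholder is just whatever image you are experimenting with):

    # Dump the device tree for the 32-bit 'virt' board, then decompile it
    qemu-system-arm -M virt,dumpdtb=virt.dtb -cpu cortex-a15 -m 1G -kernel riscos.rom
    dtc -I dtb -O dts -o virt.dts virt.dtb
    grep -n 'pl011' virt.dts    # the virt board's UART node

That virt.dts file is essentially the contract such a HAL would parse at boot. |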
David Feugey (2125) 2709 posts |
I don’t know. What I know is that one French dev managed to get a working ROM (without sound) for old versions of Qemu, without much effort – and that was before Qemu included an RPi emulation. So perhaps it’s not so difficult, even if it’s better to use a more generic QEMU ARM model with VirtIO drivers, as Theo says.

I insist on this point: even if this port could harm the market for native versions of RISC OS (if it’s still a market), it will not kill it. On the other hand, it could create some new ones. 1/ tools for tighter integration (specific versions of Uniprint/Gemini/etc., paid support to get early access to new features…) 2/ a way to package and distribute RISC OS software on other platforms.

For someone like me, who makes the effort to code my tools (private projects for clients) for both the Pi (RISC OS) and the PC (Windows), it would save a lot of time and effort: one version, plus some installers to deploy Qemu + the required VM + integration tools. A RISC OS tool to automate the creation of these installers (macOS, Windows) would be a great idea for a commercial product :) |
Clive Semmens (2335) 3276 posts |
If I could run !Draw and !Zap and my own RISCOS apps (written in BASIC – formerly with some embedded Assembler, but that’s all replaced with BASIC now, since it’s fast enough on the Pi for everything my apps do) on an emulator on an M1 Mac, I’d be a very happy man. But would my apps have the same WIMP functions available? Would I get the RISCOS desktop running in a window on the Mac? |
Rick Murray (539) 13839 posts |
I struggle to see how, given that the latest Pi release is, as some of us suspected, no longer capable of running RISC OS natively. The writing is on the wall, folks: it’s either the insanity of a ground-up rewrite that will be “similar but incompatible”, or the adoption of an emulation that isn’t of a thirty-year-old machine…
If this means the ability to run RISC OS on anything that can run that build of QEMU, then that would be good. For the die-hards, will it work on a Pi 5? ;)
Might I suggest you try DrawPlus? http://www.keelhaul.me.uk/acorn/oss/ I find Vector overkill, but DrawPlus – I’ve used it from way back – is the Draw that Acorn should have released with RISC OS 3…
I hope so, wouldn’t be so useful otherwise. |
Theo Markettos (89) 919 posts |
TBH I’m not really seeing the advantages of RISC OS running on bare metal these days, apart from being a playpen for OS developers like me. Every new platform has a ton of new drivers to write just to get into parity with an existing platform. And that’s if we can even get documentation for the platform, which has always been a problem.

Slipping underneath a ‘hypervisor’ that takes care of all the platform quirks and presents a more predictable hardware abstraction to RISC OS would mean RISC OS could run on any platform that supports that hypervisor (ie Linux+KVM). It could also improve performance, since the ‘hypervisor’ could use multiple cores for handling I/O where RISC OS can’t.

From a commercial point of view, OEMs (CJE, RComp, etc) could still sell ‘RISC OS hardware’. It’s a box that boots into fullscreen RISC OS: call the hypervisor a ‘bootloader’ which is hidden away and it looks just like a RISC OS machine does today. This is actually what Windows now does: when you’re running Windows 11 you’re actually running a fullscreen Windows VM on top of Hyper-V – they use other hidden VMs for various security services, as well as WSL and WSA. Nobody notices because it looks just the same as Windows running bare metal. Windows 10 can run in both modes and it’s impossible to tell the difference.

If you want to make a VRPC/RPCEmu-style product where it runs in a window on top of Linux (or Windows/Mac) then it can also do that, but if you want a pure RISC OS experience then the host platform can be hidden away. Adding QEMU support would allow you to do either with the same codebase.
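For the ‘box that boots straight into RISC OS’ case, the launcher really can be as dumb as a few lines of script run at host boot – a sketch under the same assumptions as before (hypothetical riscos.rom, KVM on a 32-bit-capable core):

    #!/bin/sh
    # Minimal 'bootloader': start the RISC OS VM fullscreen so the Linux host
    # underneath is never visible to the user
    exec qemu-system-aarch64 -M virt -accel kvm -cpu host,aarch64=off -m 1G \
        -kernel riscos.rom \
        -device virtio-gpu-pci -display sdl -full-screen \
        -device virtio-keyboard-pci -device virtio-mouse-pci \
        -drive file=/boot/harddisc.img,if=virtio

Run the same ROM in a window instead and you have the VRPC/RPCEmu-style product. |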
Clive Semmens (2335) 3276 posts |
Again? I think you’ve suggested it to me a few times… 8~) …one day I might, but I’m very familiar with Draw, and it does what I ask of it. But yes, I might give DrawPlus a whirl 8~) Would I get the RISCOS desktop running in a window on the Mac? That’s pretty much what I thought. Just getting Draw, Zap and BBC BASIC wouldn’t cut it, but a fully functional RISCOS desktop…yes, definitely. When it runs faster than RISCOS on a Pi4 and lets me load and save stuff straight from and to the Mac’s own filers… VERY happy me. |
David J. Ruck (33) 1635 posts |
This sounds like a good idea to me, and David and Theo have made some very good points. I would like to see a Linux VM running on the cores RISC OS isn’t using, and the ability to do remote procedure calls between them. This could even go as far as allowing non-native apps to appear on the RISC OS desktop with some degree of integration. |
Steve Pampling (1551) 8170 posts |
Various questions: QEMU on Windows, or QEMU on Linux? If people are looking at developing a fully working (down to standard-user level) QEMU-with-ARM-guest setup, then it seems to me there are two almost parallel development threads (with sub-threads)
|
Theo Markettos (89) 919 posts |
All of them. QEMU runs on Windows, Mac, Linux, FreeBSD, … a RISC OS ‘VM’ would be able to run anywhere. There is no such thing as a QEMU that only works on 64-bit ARM; you can compile it for x86, RISC-V, MIPS, PowerPC… It’s a ‘universal translator’ – you can run any ISA on any ISA. If they are the same ISA then it goes faster, but if not it still runs.

Another nice thing about QEMU is UTM, which is a slick frontend for QEMU that runs on Mac and iOS/iPadOS (if you work around Apple’s restrictions). It’s possible to package up a VM into a one-click download/install/launch and it’s very smooth.

On the question of which target, QEMU has a list of what they support. The only one there that RISC OS currently supports is the Raspberry Pi, but my experience is that this is not a good target. The QEMU implementation is enough to boot Linux, but in particular it has its own implementation of the GPU firmware interface – which is quite patchily implemented. Since this has changed over time, and RISC OS is fussy about exactly which iteration of the firmware it uses, this is awkward. You could fix it up by improving QEMU’s implementation, but then you have to maintain a fork of QEMU – I would argue this is counterproductive because users can’t just download it from standard repos/websites; you now have to offer builds for all platforms. If you upstream patches to QEMU they will take months or years to trickle down to those standard distributions. Another thing is that hacking on the Pi target needs extensive testing of both the QEMU and RISC OS sides to ensure no regressions: a lot of people will be unhappy if changes in the source mean their Pi no longer boots.

The other option is the ‘virt’ target, which targets a hypothetical virtualised machine, using a lot of VirtIO. It would be possible to configure it at runtime to add non-VirtIO devices (eg an IDE hard drive) to make it a bit more RISC OS friendly. However, the challenge here is that the VM is given some ROM and RAM at boot time, and then all other devices must be located via Device Tree. DT gives you a piece of ROM containing a binary structure that describes what devices you have in your hardware and how they are configured.

I wonder whether it’s time to revisit the current RISC OS HALs, which all seem to be written in assembler and hardcode the devices and their locations. A platform supporting device tree would need a DT parser in the HAL to set up devices (eg the location and type of the UART) and be more modular (drivers for different kinds of UART selected at runtime). It would be nice if this could be C and not assembler. I suspect some stuff is going to have to remain hardcoded (is the type of CPU a compile-time option?), but some basic device tree support would be a good way down this track.

I don’t know to what extent the device tree blob supplied by QEMU is in fact static, or whether it changes, eg from one QEMU version to another. But maybe a short-term fix would be to just make assumptions, at least in the interests of getting something basic working. They do have a feature where you can specify a specific version of the ‘virt’ implementation, so perhaps a chosen version + set of config flags will have a constant device map. That would allow a basic implementation to be got up and running; DT can be revisited later on.
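That last point is easy to check from the command line – a sketch, with the caveat that the available version names depend on the QEMU build installed and riscos.rom is still hypothetical:

    # List the versioned 'virt' machine types this QEMU build provides
    qemu-system-arm -M help | grep virt

    # Pin one version so the device map can't change under you when the
    # host's QEMU is upgraded
    qemu-system-arm -M virt-7.2 -cpu cortex-a15 -m 1G -kernel riscos.rom ...

Comparing the dumped DTBs of two pinned versions would show how much actually moves around between releases. |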
Charles Ferguson (8243) 427 posts |
QEmu is a binary-translating emulator. That means that it doesn’t execute instructions natively, but provides a translation to your current CPU architecture to execute the code it is running – it JIT-compiles your code. That in turn means that running your ARM executable on ARM hardware is going to be at least a few times slower than running the code natively – even a register load of a constant like MOV becomes an operation that loads the value and stores it in the emulated register file. Consult the documentation on the ‘TCG’, or the code in QEmu, for details of how this works.

This also means that memory accesses are more involved – a load of a memory location (LDR r0, [r1, #4], let’s say) needs to read the register file value for r1, add 4, then locate the memory location that this refers to (which, if you are using a system emulation, requires an MMU lookup that might itself require multiple loads of different locations, not to mention the search for the correct page table entries), then load the value and store it in the register file. The decoding of translation blocks happens in chunks where the code can be known to be non-branching (or simple conditionals), but still, there can be a lot of work behind the scenes in the JIT’d code.

QEmu virtualises hardware in a similar way to its normal memory accesses but can provide memory-mapped I/O access (MMIO) to emulate hardware register access. That code can execute much faster than the emulated code because it is actually compiled for the architecture you’re running on. So you can end up with hardware accesses that happen much faster than the emulated environment. If you’re doing a full system emulation with a system like QEmu, you’re going to be a few times slower than executing natively because of this.

Raising the use of QEmu as being more pertinent when faster hardware is available is reasonable. However, raising it because the hardware is an ARM system is missing the point – it doesn’t matter what hardware you’re using if you’re using QEmu. It will always be binary translating your code, so the fact that the emulated and host systems are the same architecture isn’t any benefit.

QEmu can also be used with KVM for lower-level virtualisation, which allows code to be executed natively where the host supports it. However, this requires the native system to be able to execute your code – and the Pi5 doesn’t support AArch32 in non-user modes, so privileged-mode execution would not benefit from this. As the vast bulk of the OS by necessity executes in a privileged mode, the benefit here to the Pi5 is minimal.

The fact that the Pi5 arrived isn’t of much benefit or incentive for a QEmu port. A QEmu port is probably far more useful for just about any other system – macOS, Linux, and Windows will all run QEmu and thus can benefit from such a system.
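For anyone who wants to see that translation happening, QEMU can log the guest instructions, the intermediate TCG ops and the generated host code for each translation block – a sketch, again with a hypothetical riscos.rom:

    # Log what the binary translator does with each translation block
    qemu-system-arm -M virt -cpu cortex-a15 -m 1G -kernel riscos.rom \
        -d in_asm,op,out_asm -D tcg.log

Comparing the guest instructions (in_asm) with the host code emitted for them (out_asm) gives a good feel for the overhead described above.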
You have RPCEmu on macOS already, yes? QEmu would just be another emulated system. So… yes, such a QEmu system would get you a RISC OS desktop running in a window on the Mac, but that’s the same situation you currently have with RPCEmu, so you haven’t gained that much. There are benefits in the fact that you’re not emulating RPC hardware but more native hardware, but it’s still a hardware emulator and things will look pretty similar to how they do with RPCEmu.
Because the CPU and the hardware are emulated, the question of whether it’s ‘QEMU on Windows or Linux’ is kinda irrelevant. Once you have a working OS that will run within QEmu’s system emulation, the hardware it’s running on doesn’t matter. Everything that Theo said is sane :-) (not that other comments weren’t, but I’ve got nothing more to add here :-) ) A quick reminder of the work that already exists on emulation and extending RISC OS…
There’s also the recent WAsm work, which is very interesting, but it’s unclear to me where exactly it fits. |
Clive Semmens (2335) 3276 posts |
No, I don’t. Never bothered, because the Pi’s performance is much better. Am I right to assume that RPCEmu would have relatively limited RAM, but that QEMU wouldn’t?
Would that include hardware that supports AArch32 only in User mode (such as the Pi 5)? |
Theo Markettos (89) 919 posts |
RPCEmu is limited to the 256MiB of the Risc PC (maybe 264MiB if you include the 8MiB of VRAM). QEMU can go as big as the target ARM CPU supports – could be terabytes. Also you aren’t limited to the ARMv3/v4 instructions of the Risc PC – VFP/NEON/crypto/etc. instructions can be supported.
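On the ‘virt’ board that is literally just a command-line option – a sketch (riscos.rom remains hypothetical, and the guest OS still has to know what to do with the extra RAM and newer instructions):

    # An LPAE-capable 32-bit CPU model with VFP/NEON and a few GB of RAM,
    # rather than a 1996-vintage Risc PC
    qemu-system-arm -M virt -cpu cortex-a15 -m 3G -kernel riscos.rom ...

Whether RISC OS could make use of all of it is a separate question. |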
Charles Ferguson (8243) 427 posts |
I would assume that the performance of QEmu on a Mac would pretty much match that of RPCEmu, as they both have to perform instruction emulation through the binary translation. As for the RAM, RPCEmu emulates a RiscPC, so those are the limitations it has. It could be taken in a different direction to emulate other hardware and to remove the limitations that are inherent in the system. QEmu still has limitations depending on the implementation. See Timothy’s notes in his documentation for how the memory is limited by QEmu. RPCEmu is a very good emulator; whilst it doesn’t include many of the things that are in the later CPUs, it’s still quite functional and could be used as the basis for further hardware work. The only reason that I didn’t use RPCEmu’s emulator as a basis for RISC OS Pyromaniac (and it had been the front-runner until then) was that I found Unicorn already had the abstracted interfaces and bindings for Python. If we’re talking about different directions to go, RPCEmu is not a terrible starting point.
You’d have to ask Timothy about that specific system; I’m only citing the information he’s provided on his site. |
Dave Higton (1515) 3525 posts |
One thing I discovered while working on IPP support is that the app that searches for IPP-enabled printers has to open a multicast socket, but emulated systems don’t seem to allow this because the same socket is already open in the host. Is this a soluble problem? |
Theo Markettos (89) 919 posts |
Depends on the networking config of the emulator. Some emulators (RPCEmu originally, QEMU, most hypervisors) have a mode where they appear as an additional machine on the network – do their own DHCP, get their own IP address. This may need admin rights on the host to set up this bridged mode, but once done they’re largely independent from what’s happening on the host. The socket-passthrough approach (VRPC, RPCEmu more recently) doesn’t need any specific configuration, but it’s limited by sharing the host network stack and thus its IP address.

QEMU’s default network config is called SLIRP: effectively QEMU makes a virtual NAT router that the VM talks to for DHCP, DNS, etc. Outbound connections are forwarded to the host network stack, and you can add ‘port forwards’ from the outside into the virtual ‘LAN’. That limits what listening a VM can do on the real LAN, but means you don’t need any admin rights. It differs from the VRPC approach in that the guest OS thinks it is sending network packets in the normal way (using the normal ethernet drivers), whereas VRPC dispenses with the network stack entirely and intercepts Socket_* SWIs (so is specific to RISC OS) – technically that would be called paravirtualisation.

If you want the full separate network client with its own IP that’s bridged out to your real router without the virtual NAT, that’s also an option. It’s up to you how you want to configure it. It’s a setting when you launch the VM; the VM itself doesn’t need to know whether it’s in SLIRP or bridged mode – it would use the same ethernet device, but the packets would just go to a different router/DHCP server.
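In QEMU terms the two configurations look something like this (illustrative flags only; the bridged form assumes a host bridge called br0 has already been set up):

    # SLIRP / user-mode networking: no admin rights, guest is behind a virtual
    # NAT; forward host port 8080 to a server on port 80 inside the guest
    qemu-system-arm ... -netdev user,id=net0,hostfwd=tcp::8080-:80 \
        -device virtio-net-pci,netdev=net0

    # Bridged networking: guest appears as its own machine on the real LAN,
    # with its own DHCP lease and IP address (usually needs admin rights)
    qemu-system-arm ... -netdev bridge,id=net0,br=br0 \
        -device virtio-net-pci,netdev=net0

The guest uses the same virtio-net device either way; only the plumbing on the host side changes. |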
Charles Ferguson (8243) 427 posts |
(Slight aside… using SO_REUSEPORT and SO_REUSEADDR with a listening multicast UDP socket is the way I would expect to allow you to provide such a service – but I think that requires all listeners on the port to do so; also, binding to the addresses of the interfaces you are interested in, rather than to the wildcard host, is useful.)

It is soluble, but it depends on the emulating system as to how they implemented the networking.

{VRPC} {Pyro1} The emulating system could provide a direct exposure of the host’s network interface. If you do this, then the emulated system’s network interface is constrained by the same privilege limitations as the invoking user. That might mean that you cannot bind to low ports, or that you cannot use the same ports as are open on the host. It may also mean that you cannot configure your IP address, because that’s not something allowed by the host system. And similarly, you may not have direct access to the network devices to provide a DCI 4 driver. Such an implementation might not be able to bind to a multicast port if there was already someone there.

{RPCEmu} Or the emulating system could do the above but limit you to only certain interfaces which it has bridged to the main interface. This gives you access to the same network device (at the DCI 4 layer) and can let you use your own network address, distinct from the host. You only see your own address, and thus can bind as you wish to the ports because you’re the only client on that port. This requires more set-up for the user to configure the system to forward packets from your address. Consider the networks created for qemu/kvm hosts which bridge to the host. This also means that you need to take care in the emulated system to convert bindings of the wildcard address appropriately.

The emulating system could provide an entirely emulated network stack which never actually touches your host network. This way you could bind ports any way you wish. But being entirely emulated within the system, it might never actually reach the devices you want to talk to. The network is functional but may not be useful :-) You might as well be talking to a wall… but at least you are talking, and that’s good for testing purposes.

Or a variation of the above, which then NATs your requests on to the host’s external address. For outgoing requests it’d generally be fine, and for regular bound ports no problem (allowing for privileges), but for your case of multiple bound ports, you’d probably hit the same problem with reusable ports. This is how the QEmu Slirp internal network stack works.

{Pyro2} You could provide an entirely separate network (ie network devices which are only ever used by your emulated system and others that attach to it) for your emulated system, where it talks to other systems within its ‘local network’, but cannot see the outside world. That’s similar to the above, but at the DCI 4 layer. This would allow you to bind to the ports as you wish, and then have other systems that can connect to it, but not see your wider network.

There are certainly other options as well – and it gets easier with IPv6, in the sense that allocating segments of your network to virtual systems can be done much better. One other alternative is to use a broker in the emulated system to manage your registrations and then expose that through the emulating system to a service in the host which does the announcements for you. That requires more integration with a host system, but also allows you more freedom to interwork.
However, the only real way that that sort of thing works well is with Bonjour, so meh…

RPCEmu’s tap implementation is like the one I labelled ‘{RPCEmu}’ above, I think. VirtualRPC offers an implementation that is like the one I labelled ‘{VRPC}’ above. RISC OS Pyromaniac offers options like ‘{Pyro1}’, and an experimental branch provides an implementation using the emulated system’s DCI 4 stack like ‘{Pyro2}’, which you can attach to another RISC OS Pyromaniac system, or attach to many other RISC OS Pyromaniac systems using the virtual switch https://github.com/gerph/tuntap-json-server, which can then be bridged to a real network if you wish. Corrections gratefully received if I’ve misattributed one of the network types, or missed something important.

The most significant thing is that there are many ways to emulate a system and expose the host’s resources. Don’t get stuck assuming that what is done by one system is the only way to do things.
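For what it’s worth, QEMU has a built-in way of building a ‘{Pyro2}’-style private network between several emulated machines, which might be useful for that kind of experiment (illustrative flags only):

    # Each VM started with the same multicast group joins a private virtual
    # LAN: the guests see each other, but not the real network
    qemu-system-arm ... -netdev socket,id=net0,mcast=230.0.0.1:1234 \
        -device virtio-net-pci,netdev=net0

Bridging that private segment out to the real LAN is then a separate, optional step on the host. |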
Dave Higton (1515) 3525 posts |
Theo and Charles, thank you both for your responses. One important point here is that it’s a multicast address and port. Port forwarding is not going to be any use; neither the host nor the client can be allowed exclusive access to this address and port, as they both need to send and receive through it. Traffic comes in from various sources at various times, and stuff that wasn’t requested has to be rejected according to its content. It’s multicast DNS that I’m talking about. |
Paolo Fabio Zaino (28) 1882 posts |
@ Charles
If you’re referring to my work on WASI/WASM (and/or my new UltimaVM): these are application-level bytecode interpreters, so they are useful to whoever writes applications for (or on) RISC OS. I.e. you write your app targeting my UltimaVM and it runs fine on RISC OS, Linux, macOS, Windows and BSD without changes. The bytecode VM interprets bytecode roughly twice as fast as interpreted BBC BASIC, roughly 20% faster than RiscLua bytecode interpretation, and roughly 35% faster than CPython (numbers may change as the project progresses). On the WASI side there is also a recompilation of Arculator for WASI, but that is just traditional emulation, so nothing as performant as RPCEmu or QEMU could be, and the emulated hardware is an Archimedes. So I think none of this is going to be useful for people’s requirements in this thread.

On another experimental project, I did manage to build a “lightweight” monolithic hypervisor that could run RISC OS and that can indeed abstract a Raspberry Pi 3, but at the moment it’s hypervisor-only, so to use it on a Pi 5 it needs some adaptation and will require finishing the work to add MAMBO support for look-ahead binary translation from AArch32 to AArch64, which would result in pretty much the same performance as QEMU or RPCEmu. The idea behind this hypervisor was to just abstract the hardware and create a dedicated RISC OS HAL, so that porting RISC OS to a new AArch32 platform was just a matter of ensuring the hypervisor runs on it (given the hypervisor’s driver interface is identical to Linux’s, porting drivers is straightforward). For AArch64 the idea was to add MAMBO support and look-ahead JIT code belonging to the AArch32 VM thread in the hypervisor. My hypervisor at the moment requires an MMU and a client OS that supports DT, so it’s not suitable for RISC OS as it is (this falls into what Theo was mentioning, and I agree that DT support in the HAL would help a lot).

From a performance point of view, there isn’t much in it: IMHO it’ll be a battle of ~5% to 10% performance difference across all these approaches (RPCEmu, QEMU, whatever). On the platform-emulation side, moving towards a Pi 2 or 3 would certainly be better, but I don’t see any urgency TBH, given RISC OS doesn’t use much of modern hardware anyway. |
Charles Ferguson (8243) 427 posts |
I (and, I suspect, Theo) wasn’t giving examples of port forwarding as solutions to your problem, but as examples of ways in which the network stack can be implemented. There are degrees of limitation in all of them – and multicast ports are one such limitation of systems that use that mechanism.
Multiple clients can listen for Multicast DNS if you’ve configured the sockets properly, and the emulating system handles them. Example:

    charles@phonewave ~/projects/RO/pyromaniac (report-error-vdu-experiment↑)> pyrodev --common --command gos
    Supervisor
    *pyromaniachostcommand netstat -na -p udp | grep 5353
    udp6       0      0  *.5353                 *.*
    udp4       0      0  *.5353                 *.*
    Return code: 0
    *
    *rmload modules.ResolverMDNS
    *
    *
    *inetstat -a
    Active Internet connections (including servers)
    Proto Recv-Q Send-Q  Local address          Foreign address        State
    udp        ?      ?  *:5353                 -                      Listen
    *
    *pyromaniachostcommand netstat -na -p udp | grep 5353
    udp4       0      0  *.5353                 *.*
    udp6       0      0  *.5353                 *.*
    udp4       0      0  *.5353                 *.*
    Return code: 0
    *

In this example I start RISC OS Pyromaniac – as stated, this is a system that uses the host socket system.
If I start a separate RISC OS Pyromaniac in another window, I can even do it again…

    charles@phonewave ~/projects/RO/pyromaniac (report-error-vdu-experiment↑)> pyrodev --common --command gos
    Supervisor
    *rmload modules.ResolverMDNS
    *
    *
    *pyromaniachostcommand netstat -na -p udp | grep 5353
    udp4       0      0  *.5353                 *.*
    udp4       0      0  *.5353                 *.*
    udp6       0      0  *.5353                 *.*
    udp4       0      0  *.5353                 *.*
    Return code: 0
    *

So it certainly can be done :-) |
Steve Pampling (1551) 8170 posts |
I’m aware that it can be compiled for various hardware, but the question relates to which pre-built setups you were testing. The question is: can you point to a specific pre-built set of binaries and a tested invocation command string / script? |
Steve Pampling (1551) 8170 posts |
Same subnet, or routed through a gateway? |
Charles Ferguson (8243) 427 posts |
In my example, same host. In the configuration I was using, a Socket_Creat (and the rest) is mapped through (eventually) to socket() on the host system. RISC OS Pyromaniac and the macOS network stack are effectively the same. The constants are mapped around, and structures are modified appropriately to make them use the host format, and in some cases the functionality is emulated where it cannot be provided in exactly the same way. But they’re the same thing. |
Steve Pampling (1551) 8170 posts |
If you’d been dealing with two devices either side of a router/gateway then you’d need some helper config on the gateway. The RPCEmu NAT/routed network solution fails on that. |