From: Niall D. <ndo...@bl...> - 2013-04-18 14:40:25

On 4/17/2013 5:58 PM, John Reiser wrote:
>> Let us assume that system calls aren't a problem, as we can work around
>> those here thanks to QNX's micro kernel nature by simply pushing syscalls
>> over an IP link. What would be the next major show stopper, purely from
>> within valgrind itself?
>
> The target (ARM) shared libraries. Logically there are two separate
> name spaces for "the" filesystem (ARM target vs. x86 host) but physically
> there is only one (the x86 host). Of course you can try LD_LIBRARY_PATH,
> etc., but probably that's only the beginning.

Perhaps I miscommunicated. QNX runs on x86 just as it does on ARM. It's a microkernel, so it can distribute itself across nodes. QNX prefers that the nodes it distributes over share the same endianness, but beyond that it doesn't really care.

In other words, just to be absolutely clear: you can have an x86 QNX process running on a PC yet using an ARM QNX kernel running on an attached ARM device. That x86 QNX process will see a world identical in every way to what it would see running on the ARM device: same filesystem, same context switches, everything.

So what I'm saying is: discount any worries about paths and libraries not matching up. What remains within valgrind itself to get an x86-compiled binary to valgrind an ARM executable?

> You are describing a binary translation system - not too bad when the
> host and target OS's are similar in semantics.

In this situation they're identical. Different architectures, but the code won't notice.

> DEC's binary translation systems come to mind. Intercept the system
> call instructions themselves, and revector to jackets.

In a microkernel there is no such thing as a syscall, just messaging, which is transport agnostic. In our valgrind port to QNX we emulate a syscall interface for the benefit of valgrind only.

Niall
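P.S. For anyone curious what "pushing syscalls over an IP link" looks like in the abstract, here is a minimal sketch in Python. The JSON wire format, the dispatch table, and the `remote_syscall` helper are all hypothetical illustrations of the idea, not QNX's actual Qnet messaging protocol:

```python
# Toy sketch: forwarding a "syscall" over an IP link. The host side
# marshals a request, ships it over TCP, and the target side performs
# the real call and replies. (Hypothetical protocol, for illustration.)
import json
import os
import socket
import threading

def serve(listener):
    """Target side: accept one request, perform the call, reply."""
    conn, _ = listener.accept()
    with conn:
        req = json.loads(conn.recv(4096).decode())  # small messages only
        # A dispatch table stands in for the real kernel interface.
        handlers = {"getcwd": os.getcwd}
        result = handlers[req["call"]]()
        conn.sendall(json.dumps({"result": result}).encode())

def remote_syscall(addr, call):
    """Host side: marshal the call name, send it, return the reply."""
    with socket.create_connection(addr) as s:
        s.sendall(json.dumps({"call": call}).encode())
        return json.loads(s.recv(4096).decode())["result"]

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port on loopback
listener.listen(1)
addr = listener.getsockname()

t = threading.Thread(target=serve, args=(listener,))
t.start()
cwd = remote_syscall(addr, "getcwd")  # "syscall" executed remotely
t.join()
listener.close()
print(cwd)
```

The point is only that once the call is a message, the transport underneath (loopback here, Qnet over IP in QNX's case) is an implementation detail the caller never sees.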