From: Doug R. <df...@nl...> - 2004-01-15 20:16:47
On Thu, 2004-01-15 at 19:49, Jeremy Fitzhardinge wrote:
> On Thu, 2004-01-15 at 10:37, Doug Rabson wrote:
> > I put something together today (attached). I also had to tweak
> > vg_mylibc.c somewhat since my pthreads library wanted to override
> > sigprocmask, sigaction etc. I've attached the result for your
> > amusement.
>
> Have you tested it much? Hm. The more I look at it, the more potential
> problems I see. We're using a couple of signals for internal use, which
> are the same ones as the Linux libpthread uses for itself (deliberately;
> I wanted to keep things looking the same to the clients). We need to be
> sure to pay attention to that (NPTL's sigaction prevents setting
> handlers for those signals anyway, but of course we're going direct to
> the kernel).

I tested it on my FreeBSD build and it works ok for most of the tests
which worked before (some of the tests fail for me anyway because of
glibc-dependent stack traces etc.). The signal-dependent tests like
signal2 and sigaltstack seem to work.

Yes, I'm aware of the internal signal thing and I left that alone apart
from using pthread_kill to direct them to the proxy thread. It seems to
me that VKI_SIGVGKILL might not be needed, since pthread_cancel should
be roughly equivalent.

> So I guess the prerequisite for this is that we use libc for signal
> management, if nothing else. Is that what you did to vg_mylibc?

Yes. Of the two pthread implementations I tested, one overrides all the
signal calls to help it implement its many-user-threads-to-few-kernel-threads
policy.

> > I left the pipes alone in the interests of simplicity.
>
> Good idea. There's a few things which depend on the pipes, so they'll
> need some thinking about.
>
> > > Hm. There's still one hole in allowing unfettered use of system
> > > libraries - we need to make sure that any FDs opened are pushed into
> > > Valgrind's range.
> >
> > I didn't think of that.
> > I don't think that any of the FreeBSD pthreads implementations has
> > any internal file descriptors. I'm not sure about NPTL/linuxthreads.
>
> I don't think NPTL does, but LinuxThreads might because it has that
> manager thread thing. Maybe it does it all with signals. If it does
> use file descriptors, I presume it must do something to get them out of
> the way of the app's FDs. The FD allocation algorithm (first available)
> is part of the Unix API, and some programs depend on it, and would get
> upset to find unexpected FDs lying around.
>
> But my real concern was other libraries which open files, etc. stdio,
> for example.
>
> One possibility is to fully virtualize the client's FDs with a mapping
> table. That means we need to be able to intercept all ways in which the
> client and the kernel can refer to FDs to each other. For plain old
> read/write/open/close that's pretty simple, but mucking about with
> poll/select sets is a bit hairy and I'm a bit concerned about things
> like ioctls which can generate new FDs by unconventional means.

That would work pretty well in most cases. I don't think there can be
many weird ways for ioctls to generate new FDs. We need to handle them
all for proper FD leak checking anyway.

> > I think that it probably never happens in practice. Just implementing
> > an ia64 JIT at all will be enough work without worrying about ia32
> > switching.
>
> The impression I get is that the hardware ia32 emulation is so slow that
> hardly anyone uses it; Intel have a software dynamic-compilation ia32
> emulator which is a lot faster (within spitting distance of a similarly
> clocked Xeon).

I was just reading about that yesterday on The Register. I've used the
hardware ia32 emulation on early Itanium 1 hardware and it sucks big
time. Software is likely to be lots faster.