From: Jeremy F. <je...@go...> - 2004-03-23 07:16:21
Quoting "KJK::Hyperion" <no...@li...>:

> I'm not too worried about memory usage (well, there's the issue of
> placing Valgrind data so that it doesn't conflict with certain
> non-relocatable system DLLs... but there should be plenty of room in
> the middle). Separation of address spaces is more a matter of "playing
> by the rules".

Yes, but there is the issue of simply running out of address space. The
numbers you mention below suggest that there's less than 2G of address
space for applications under Windows, which means that if the client is
sharing the address space with shadow data, there is less than 1G for
the client's own use.

> Anyway, how do tools register with the JIT engine so they are called
> at certain points?

Um, well, they get to instrument the code as it goes through the JIT.
There are also special callbacks for things like allocations, but the
majority is done with instrumentation.

> because the issue now is whether they can store most data in
> Valgrind's process and only require small "registration data" in the
> client, or not. Ideally, all tools (except maybe memcheck) should run
> in Valgrind's process and be called through some form of RPC by the
> JIT (running in the client), so their execution won't interfere with
> the client

Why do you say that? Memcheck, addrcheck, cachegrind, and helgrind use
shadow memory a lot (at least every memory access), and making access to
the shadow any slower would have enormous performance effects.

> Apropos, I've downloaded some CVS release, but I have a hard time
> understanding much of it. Basically, all I've understood is that
> Valgrind has its own scheduler. The rest looks pretty obscure. What do
> you think would be the best way to get started on Valgrind internals?

Julian's internals document is still a reasonable start for the overall
design, though many of the details have changed. Using --trace-* options
will give you some idea about what's going on inside.
It isn't wildly complex, but there are a lot of details.

> On an unrelated topic: does the core code depend on GCCisms? I've only
> seen surprisingly little inline assembler, some
> expression-with-statements, noreturn functions and functions with
> register parameters - all of which have some equivalent in Windows
> compilers - and playing games with symbol names, which doesn't have an
> effect on Win32. Is there much more?

Local functions with lexically-scoped variables are probably the most
unportable gcc extension.

> > It is really multithreaded as far as the client is concerned;
>
> *this* is what I'm not sure about. I've read that the latest Microsoft
> SQL Server has its own scheduler, and I've read a pretty detailed
> description of it on the weblog of some Microsoft guy. It looks like
> it can work only for very specific operations, like only for file I/O
> - SQL Server can afford it because, like all database servers, it's
> largely self-contained, but most applications aren't.

Well, for each application-level thread, Valgrind creates a kernel
thread in order to deal with blocking syscalls. It's just that the
application code itself doesn't run in that thread. In other words,
Valgrind looks like a multi-threaded program to the kernel, even if it
does simple time-slicing within one thread for the client application
threads.

> > Hm, that isn't all that high. Does that mean a process has less than
> > 2G of available address space under XP?
>
> maybe I'm confusing addresses. You know, all those hexadecimal
> digits... anyway </me fetches calculator>, the highest user-mode
> address is reported here (Windows 2000) as being 0x7FFEFFFF, meaning
> 64 Kb are unavailable. The shared read-only data begins at 0x7FFE0000,
> and includes the tick counter, some information about the kernel and
> of course the system call thunk.
> Not sure where the probe address is at, and if its semantics are what
> I believe they are (probably not)

Well, that's only about 2G. Typically under Linux, the client address
space is from 0-3G (though it can be different for different kernel
configurations).

[ sorry about the formatting - nasty webmail ]

J