|
From: Jeremy F. <je...@go...> - 2002-10-01 06:49:53
|
I'm writing a skin to generate gprof-like output, so I need to see all
the edges in the control flow graph. In particular, I'd like to insert
some instrumentation code which is run IFF a conditional branch is
taken.
I see a few options:
* something to properly represent uInstr sequences with
conditionals within the ucode for one real instruction (ie,
some way of representing jumps to real addresses rather than
simulated addresses). Sounds messy.
* Intercept the jump target address and generate a completely
new piece of code at some place within the simulated address
space. Ugly.
* Introduce a new exceptional value for ebp when it is passed
back into the dispatcher to trigger a call into the skin.
Would need some way to attach some kind of argument values for
the call (encode in %edx?). Seems like the least nasty.
Any opinions?
J
|
|
From: Nicholas N. <nj...@ca...> - 2002-10-02 11:23:31
|
On 30 Sep 2002, Jeremy Fitzhardinge wrote:
> I'm writing a skin to generate gprof-like output, so I need to see all
> the edges in the control flow graph. In particular, I'd like to insert
> some instrumentation code which is run IFF a conditional branch is
> taken.
>
> I see a few options:
>     * something to properly represent uInstr sequences with
>       conditionals within the ucode for one real instruction (ie,
>       some way of representing jumps to real addresses rather than
>       simulated addresses). Sounds messy.
>     * Intercept the jump target address and generate a completely
>       new piece of code at some place within the simulated address
>       space. Ugly.
>     * Introduce a new exceptional value for ebp when it is passed
>       back into the dispatcher to trigger a call into the skin.
>       Would need some way to attach some kind of argument values for
>       the call (encode in %edx?). Seems like the least nasty.
>
> Any opinions?
Best way I can think of doing it, which only requires skin changes rather
than core changes, is this: using the `extended_UCode' need, add a new
UInstr PRE_JCC, which gets inserted by SK_(instrument) before conditional
JMPs, evaluates the condition, and calls a C function (or whatever) if
it's true. This would duplicate the condition evaluation but that
shouldn't matter since they're trivial (just checking an EFLAGS bit I
think).
It's a bit nasty that something as simple as this requires a new UInstr...
Oh, and apologies for the delay in replying.
N
|
|
From: Jeremy F. <je...@go...> - 2002-10-02 16:24:33
|
On Wed, 2002-10-02 at 04:23, Nicholas Nethercote wrote:
> Best way I can think of doing it, which only requires skin changes rather
> than core changes, is this: using the `extended_UCode' need, add a new
> UInstr PRE_JCC, which gets inserted by SK_(instrument) before conditional
> JMPs, evaluates the condition, and calls a C function (or whatever) if
> it's true. This would duplicate the condition evaluation but that
> shouldn't matter since they're trivial (just checking an EFLAGS bit I
> think).
>
> It's a bit nasty that something as simple as this requires a new UInstr...
Well, I've actually come up with a simpler approach. Since what I want
is the (from, to) pair for each BB graph edge, I'm simply updating a
global (bb_from) with %EIP before each jump, and then creating/updating
the edge (bb_from, %EIP) at the entry to each BB.
At present I'm using a single global, which means that I'll be creating
spurious edges when there are context switches between threads. The
obvious place to store the information is in the baseBlock, with it
copied to/from the thread state on context switch. I didn't see a
mechanism for allocating variable space in the baseBlock, nor a way of
conveniently addressing baseBlock offsets directly. Should I add one?
Or is there some other way of storing per-thread information?
J
|
|
From: Nicholas N. <nj...@ca...> - 2002-10-02 19:25:22
|
On 2 Oct 2002, Jeremy Fitzhardinge wrote:
> At present I'm using a single global, which means that I'll be creating
> spurious edges when there's context switches between threads. The
> obvious place to store the information is in the baseBlock, and have it
> copied to/from the thread state on context switch. I didn't see a
> mechanism for allocating variable space in the baseBlock, nor a way of
> conveniently addressing baseBlock offsets directly. Should I add it?
> Or some other way of storing per-thread information?
Cachegrind stores variable-sized basic-block information. It is pretty
low-level and dirty: it allocates a flat array in which cost centres of
different sizes are all packed in together, with different cost centre
types distinguished by a tag. The basic blocks' arrays are stored in a
hash table.
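A minimal sketch of that packing scheme, with all names invented for
illustration (this is not Cachegrind's actual code): cost centres of
different sizes sit back-to-back in one flat byte array, each record
starting with a tag byte that tells the walker how to decode and skip it.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Two hypothetical cost-centre types of different sizes, each with a
   leading tag so records can be distinguished when packed together. */
enum CCTag { CC_INSTR = 1, CC_READ = 2 };

typedef struct { unsigned char tag; unsigned long count; } InstrCC;
typedef struct { unsigned char tag; unsigned long count, misses; } ReadCC;

/* Append a record to the flat buffer; return the new write offset. */
static size_t pack_instr(unsigned char *buf, size_t off) {
    InstrCC cc = { CC_INSTR, 0 };
    memcpy(buf + off, &cc, sizeof cc);
    return off + sizeof cc;
}

static size_t pack_read(unsigned char *buf, size_t off) {
    ReadCC cc = { CC_READ, 0, 0 };
    memcpy(buf + off, &cc, sizeof cc);
    return off + sizeof cc;
}

/* Walk the flat array, dispatching on the tag byte (which is always at
   offset 0 of each record) to find where the next record starts. */
static size_t count_records(const unsigned char *buf, size_t len) {
    size_t off = 0, n = 0;
    while (off < len) {
        switch (buf[off]) {
        case CC_INSTR: off += sizeof(InstrCC); break;
        case CC_READ:  off += sizeof(ReadCC);  break;
        default: return n;  /* unknown tag: stop */
        }
        n++;
    }
    return n;
}
```

The point is just that variable-sized records can share one allocation as
long as every record is self-describing via its tag.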
Josef's patch uses the same basic mechanisms, but does more complicated
stuff with the hash tables.
So there's not really any built-in mechanism, but you can certainly
allocate yourself some space for each basic block in SK_(instrument). As
for addressing baseBlock offsets directly, I'm not sure what you mean --
the orig_addr is passed in to SK_(instrument); is that not enough? I'm
also not sure how threads ("per-thread information") relate to this.
N
|
|
From: Jeremy F. <je...@go...> - 2002-10-02 20:41:31
|
On Wed, 2002-10-02 at 12:25, Nicholas Nethercote wrote:
> On 2 Oct 2002, Jeremy Fitzhardinge wrote:
> > At present I'm using a single global, which means that I'll be creating
> > spurious edges when there's context switches between threads. The
> > obvious place to store the information is in the baseBlock, and have it
> > copied to/from the thread state on context switch. I didn't see a
> > mechanism for allocating variable space in the baseBlock, nor a way of
> > conveniently addressing baseBlock offsets directly. Should I add it?
> > Or some other way of storing per-thread information?
>
> Cachegrind stores variable-sized basic-block information. It is pretty
> low-level and dirty: it allocates a flat array in which cost centres of
> different sizes are all packed in together, with different cost centre
> types distinguished by a tag. The basic blocks' arrays are stored in a
> hash table.
Yes, I've got that. I have a hash which keeps per-basic-block
information. But what I also want is a hash which keeps a count of
control flow edges between basic blocks. That is, the key of the hash
is not orig_eip, but the tuple (from_bb, to_bb). The way I maintain
this is by inserting an assignment to a global variable "prev_bb" (ie,
code to do prev_bb = cur_eip) just before each JMP instruction
(conditional or otherwise). Then, at the start of each basic block, I
update the edge count structure by looking up (and possibly creating)
the tuple (prev_bb, cur_eip).
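The edge-counting side of that scheme can be sketched like this; the hash
table is a toy chained one and all names (bump_edge, on_bb_entry, etc.)
are invented for illustration, not taken from the actual skin:

```c
#include <assert.h>
#include <stdlib.h>

/* One control-flow-graph edge: a (from, to) pair of BB addresses and
   the number of times that edge has been taken. */
typedef struct Edge {
    unsigned long from, to;
    unsigned long count;
    struct Edge *next;
} Edge;

#define NBUCKETS 1024
static Edge *buckets[NBUCKETS];

static unsigned hash_edge(unsigned long from, unsigned long to) {
    return (unsigned)((from * 31 + to) % NBUCKETS);
}

/* Look up the (from, to) edge, creating it on first sight, and bump
   its taken-count.  Returns the new count. */
static unsigned long bump_edge(unsigned long from, unsigned long to) {
    Edge **pe = &buckets[hash_edge(from, to)];
    for (Edge *e = *pe; e != NULL; e = e->next)
        if (e->from == from && e->to == to)
            return ++e->count;
    Edge *e = malloc(sizeof *e);
    e->from = from; e->to = to; e->count = 1;
    e->next = *pe; *pe = e;
    return 1;
}

/* The global written just before each JMP (prev_bb = cur_eip); at the
   entry of the next BB we record the edge (prev_bb, cur_eip). */
static unsigned long prev_bb;

static void on_bb_entry(unsigned long cur_eip) {
    if (prev_bb != 0)
        bump_edge(prev_bb, cur_eip);
}
```

Note that the key is the pair, so two BBs connected by both a taken and a
fall-through path show up as two distinct edges.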
The trouble with this scheme is that if the dispatch loop decides that
it is time to switch threads, prev_bb will have been set by the previous
thread, and therefore the control flow graph will have spurious edges
which represent context switches. While this isn't completely
undesirable, it isn't what I want to measure at the moment.
To solve this, prev_bb needs to be a per-thread value rather than a
global one. It seems to me that a clean way of solving this is to
introduce a mechanism which is analogous to VG_(register_*_helper) which
allows a skin to allocate space in the baseBlock, with a change to the
scheduler to save and restore the values on context switch and some way
to generate uInstr code to load and store them.
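As a rough model of what that mechanism would do (every name here is
hypothetical; this is not proposed Valgrind code, just the save/restore
idea in miniature): the baseBlock holds the live copy of each registered
skin word, and a context switch swaps it against per-thread saved copies.

```c
#include <assert.h>

#define MAX_SLOTS   4
#define MAX_THREADS 8

static unsigned long base_block[MAX_SLOTS];          /* live values */
static unsigned long saved[MAX_THREADS][MAX_SLOTS];  /* per-thread copies */
static int n_slots;

/* The skin asks for a word of per-thread state at startup and gets
   back the slot index it will address the word by. */
static int register_skin_word(void) {
    assert(n_slots < MAX_SLOTS);
    return n_slots++;
}

/* What the scheduler would do on a context switch: stash the outgoing
   thread's registered words and load the incoming thread's. */
static void context_switch(int from_tid, int to_tid) {
    for (int i = 0; i < n_slots; i++) {
        saved[from_tid][i] = base_block[i];
        base_block[i] = saved[to_tid][i];
    }
}
```

With prev_bb living in a slot like this, each thread sees only its own
value, and the spurious cross-thread edges disappear.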
J
|
|
From: Nicholas N. <nj...@ca...> - 2002-10-03 08:51:17
|
On 2 Oct 2002, Jeremy Fitzhardinge wrote:
> To solve this, prev_bb needs to be a per-thread value rather than a
> global one. It seems to me that a clean way of solving this is to
> introduce a mechanism which is analogous to VG_(register_*_helper) which
> allows a skin to allocate space in the baseBlock, with a change to the
> scheduler to save and restore the values on context switch and some way
> to generate uInstr code to load and store them.
Do you need to store this information in baseBlock? You could do it with
global variables in your skin. There's a UCode-generating function
VG_(set_global_var) that might be useful for this.
N
|