From: Julian S. <js...@ac...> - 2003-03-30 11:35:29

Hi. I'm an author of valgrind, a memory debugger which I believe at least
one person has used on FlightGear.

We've just extended valgrind to handle MMX instructions, and it works on
some simple-ish test cases. I was wondering if FlightGear can be configured
to use MMX, and, if so, whether anyone would care to try running such a
configuration on valgrind. It would be useful for us, to gain early feedback
on our MMX work. It might be useful for you if MMXised code is something you
want to debug.

If you want to try this, you need the valgrind cvs head via anoncvs at
sourceforge.net. Mail me (js...@ac...) for help on getting + building it.

We hope to find the time, soon, to attempt support of SSE/SSE2 instructions.
If you know of any other projects which might like to try MMX on Valgrind,
please feel free to pass this message along.

Thanks, J
From: Nicholas N. <nj...@ca...> - 2003-03-30 09:11:31
On Sun, 30 Mar 2003, Julian Seward wrote:
> any skin which wishes to track stack memory permissions must provide at
> least the general functions
>
> new_mem_stack
> die_mem_stack
>
> They may optionally provide any subset of the following:
>
> {new,die}_mem_stack_{4,8,12,16,20,24,28,32}
>
> in which case those will get called in preference, in cases where the code
> generator can establish what the delta is, and generally when convenient for
> the code generator. The code generator will try to call the specialised fns,
> but does not promise to do so if that isn't convenient.
>
> The specialised functions are passed %ESP after the adjustment has been made,
> ie the new value. The general fns are passed old and new values. None of the
> functions can make any assumptions about alignment; they have to check for
> themselves.
Yes. Or maybe the specialised functions can be passed the old %ESP; that
might make slightly more sense for them.
Something else occurred to me though. This "trackable event" is something
that the skin could do -- it could spot PUT %ESP instructions itself --
it's currently done by the core just as a convenience for skins.
So my second idea is this: make the skins do this. Don't have any core
"trackable events" for this (new_mem_stack, etc). Instead, have the core
provide a function VG_(ESP_changed_by)(). When a skin instruments a PUT
instruction, it can call this to see what the ESP delta is. Some special
value (0? 0xcfffffff? 0x87654321?) would indicate "don't know". The skin
can then choose to CCALL whatever function it likes based on this value,
eg. a specialised one if it was a common delta, or a general one if an
uncommon or unknown delta.
It would have exactly the same effect as doing it with "trackable events",
but gives slightly more flexibility to the skin -- it could have as many
specialised %esp-adjustment functions as it wanted, which could have any
function arguments it wanted -- and would not saddle the core with this
family of "trackable events". But it wouldn't make life any more
difficult for the skin... well the only onus is on the skin writer to
realise they have to use this function if they want to know about
%esp-adjustments. But skin writers have to know about a lot already, so I
don't think this is so bad.
Combine this idea with the move-malloc-into-skins idea, and we've lost
quite a few "trackable events" -- new_mem_heap, new_mem_stack, etc. This
represents a minor shift in philosophy. Previously, "trackable events"
were for things that couldn't be detected by the skin, plus enough other
events to cover all memory operations of interest. With these proposed
changes, "trackable events" would only cover those things that couldn't be
detected by the skin.
In one way this is bad -- there's no uniform mechanism for detecting all
interesting memory ops. But in another way it's good -- the core is purer
(more RISC-like, if that sounds better :) since "trackable" events really
are only those things that cannot be detected by a skin.
To me, this feels like the right way to do things.
> As a sanity check, let me ask (possibly again): does this scheme in any way
> mess up the nice core/skin split, or generally clog up your architectural
> cleanups in any other way?
No. And with my new proposed change, it's even better :)
N
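[Editor's note] The skin-side dispatch Nicholas proposes can be sketched in plain C. This is a toy model, not Valgrind code: `VG_(ESP_changed_by)()` is only proposed in this thread, so here the instrumenter is simply handed the delta, the handler names mirror the ones discussed above, and a counter stands in for real permission-map updates.

```c
#include <stdint.h>

/* Hypothetical sentinel for "delta unknown at instrumentation time";
   the thread hadn't settled on an actual value yet. */
#define ESP_DELTA_UNKNOWN INT32_MIN

/* Toy stand-ins for a skin's stack handlers; a real skin would mark
   words addressable/unaddressable here.  The counter makes the
   dispatch visible. */
static int words_tracked = 0;

static void new_mem_stack_4(uint32_t new_esp) { words_tracked += 1; }
static void new_mem_stack_8(uint32_t new_esp) { words_tracked += 2; }
static void new_mem_stack(uint32_t old_esp, uint32_t new_esp)
{
    words_tracked += (old_esp - new_esp) / 4;   /* stack grows downwards */
}

/* What a skin's instrumentation pass might do at a PUT %ESP, given the
   delta the core reports: CCALL a specialised handler for a common
   constant delta, else the general one. */
static void on_esp_put(uint32_t old_esp, int32_t delta)
{
    if (delta == -4)
        new_mem_stack_4(old_esp - 4);
    else if (delta == -8)
        new_mem_stack_8(old_esp - 8);
    else if (delta != ESP_DELTA_UNKNOWN && delta < 0)
        new_mem_stack(old_esp, old_esp + delta);
    /* unknown delta: a real skin would emit a run-time check instead */
}
```

The point of the scheme is visible in `on_esp_put`: the skin, not the core, decides how many specialised cases exist and what arguments they take.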
From: Robert W. <rj...@du...> - 2003-03-30 04:24:29
|
Hi all,
The attached patch keeps track of all open file descriptors and dumps
out a summary at the end of the run if --leak-check is used. It knows
about fds that have been inherited from the parent process. The
information it dumps out is the fd and a backtrace from where it was
opened.
Some caveats/questions:
* Right now, it doesn't handle fds that have been passed over a
socket using sendmsg. This shouldn't be too difficult to add -
I'll try get it done in the next couple of days.
* If /proc is missing (which is unlikely, but you never know) then
it doesn't notice fds that have been inherited from the parent
process. This would be easyish to fix, too.
* It should dump out more information about each open fd - the
filename or socket information would be useful, for example.
Again, I'll add that in the next couple of days.
* Should it use its own command-line option instead of
piggy-backing on top of --leak-check?
The patch is against the CVS head as of 8PM PST on March 29th. I'm
releasing it now despite all of the above missing functionality because
it's still useful. Please let me know what you think.
Regards,
Robert.
--
Robert Walsh
Amalgamated Durables, Inc. - "We don't make the things you buy."
Email: rj...@du...
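[Editor's note] For readers without the attachment, the bookkeeping the patch describes can be sketched without any Valgrind internals. This is a toy model under stated assumptions: `track_open`, `track_close`, and `report_leaked_fds` are hypothetical names, and a caller-supplied label stands in for the backtrace the real patch records at the open site.

```c
#include <stdio.h>

/* Toy registry mapping fd -> where it was opened.  The real patch hooks
   the open/socket/dup syscalls and saves a stack trace; a string label
   stands in for that here. */
#define MAX_FDS 1024
static const char *open_site[MAX_FDS];

static void track_open(int fd, const char *where)
{
    if (fd >= 0 && fd < MAX_FDS)
        open_site[fd] = where;
}

static void track_close(int fd)
{
    if (fd >= 0 && fd < MAX_FDS)
        open_site[fd] = NULL;
}

/* At the end of the run (the patch piggy-backs on --leak-check), report
   every fd still registered and return how many there were. */
static int report_leaked_fds(FILE *out)
{
    int leaked = 0;
    for (int fd = 0; fd < MAX_FDS; fd++) {
        if (open_site[fd]) {
            fprintf(out, "fd %d still open, opened at: %s\n",
                    fd, open_site[fd]);
            leaked++;
        }
    }
    return leaked;
}
```

The fds inherited from the parent would be seeded into the same table at startup (by scanning /proc/self/fd, per the caveat above).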
From: Julian S. <js...@ac...> - 2003-03-30 01:20:01
|
> Your idea is quite plausible. Basically the skins would bypass the core's
> built-in way of handling stack updates in order to do them itself, because
> it can be more efficient that way.
>
> Which makes me think: if we want to make this improvement for
> {Mem,Addr}check, any other skin that tracks %esp changes probably wants it
> as well. So let's try to improve the general mechanism rather than
> implementing a {Mem,Addr}check-only optimisation.
Yes, indeed. But your suggestion below is an improvement to the general
mechanism, not just to {Mem,Addr}check, yes?
> You could then have a number of trackable events for skins to hook into:
>
> new_mem_stack_aligned_4
> new_mem_stack_aligned_8
> etc.
> die_mem_stack_aligned_4
> die_mem_stack_aligned_8
> etc.
>
> for the most common cases (eg. 4, 8, 12, 16, 20, 24). They would be
> passed the old %esp. The skins could have unrolled versions of the
> general stack-adjusting code for these cases.
>
> Also have:
>
> new_mem_stack_aligned_gen
> new_mem_stack
> die_mem_stack_aligned_gen
> die_mem_stack
>
> If a skin didn't provide these special case functions, the core could fall
> back to using the general case ones if they were provided -- this would be
> useful when first writing skins, when you don't want to write five
> different versions of the same function. Ie. new_mem_stack_aligned_4
> would be used if present, but fall back to new_mem_stack_aligned_gen if
> present, but fall back to new_mem_stack if present, else do nothing.
>
> One complication -- how do we know at compile-time if a stack-adjustment
> is aligned? We can't (AFAICT) so maybe the events shouldn't have any
> mention of alignment, and it's up to the skin to do an alignment check and
> speed up its actions based on this if it wants. So the events might be
> new_mem_stack_4, new_mem_stack_8, ..., new_mem_stack_gen.
Yes. Although in practice the stack is almost always kept at least 4-aligned,
we can't be sure of this, so all the functions need to check this.
Tell me if the following proposal makes sense:
-------------
any skin which wishes to track stack
memory permissions must provide at least the general functions
new_mem_stack
die_mem_stack
They may optionally provide any subset of the following:
{new,die}_mem_stack_{4,8,12,16,20,24,28,32}
in which case those will get called in preference, in cases where the code
generator can establish what the delta is, and generally when convenient for
the code generator. The code generator will try to call the specialised fns,
but does not promise to do so if that isn't convenient.
The specialised functions are passed %ESP after the adjustment has been made,
ie the new value. The general fns are passed old and new values. None of the
functions can make any assumptions about alignment; they have to check for
themselves.
-------------
Another performance benefit, albeit a small one, is that the specialised
fns take one param rather than two, so that means less register shuffling
etc in the call sequence.
As a sanity check, let me ask (possibly again): does this scheme in any way
mess up the nice core/skin split, or generally clog up your architectural
cleanups in any other way?
> I'll definitely look into this once I've finished looking at moving
> malloc() et al out of core.
Great.
J
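[Editor's note] Julian's register-then-fall-back scheme can be sketched as a C dispatch table. The struct and function names below are illustrative only, not the real Valgrind API; the counter stands in for real permission updates.

```c
#include <stddef.h>

/* Hypothetical event table: the general handler is required, the
   specialised ones are an optional subset and may be left NULL. */
typedef struct {
    void (*new_mem_stack)(unsigned old_esp, unsigned new_esp); /* required */
    void (*new_mem_stack_4)(unsigned new_esp);                 /* optional */
    void (*new_mem_stack_8)(unsigned new_esp);                 /* optional */
    /* ...the proposal runs through _12 up to _32... */
} StackEvents;

/* Core side: call a specialised fn when the delta is a known small
   constant and the skin provided one; otherwise fall back to the
   general fn.  Per the proposal, specialised fns get the new %ESP,
   the general fn gets both old and new values. */
static void core_esp_adjusted(const StackEvents *ev,
                              unsigned old_esp, int delta)
{
    unsigned new_esp = old_esp + delta;        /* delta < 0: stack grew */
    if (delta == -4 && ev->new_mem_stack_4)
        ev->new_mem_stack_4(new_esp);
    else if (delta == -8 && ev->new_mem_stack_8)
        ev->new_mem_stack_8(new_esp);
    else
        ev->new_mem_stack(old_esp, new_esp);
}

/* Example skin: provides the required general fn and only the _4 case. */
static int words_marked;
static void eg_stack_4(unsigned new_esp) { words_marked += 1; }
static void eg_stack_gen(unsigned old_esp, unsigned new_esp)
{
    words_marked += (old_esp - new_esp) / 4;
}
```

This also shows the one-parameter benefit Julian mentions: `eg_stack_4` needs a single argument in the call sequence, `eg_stack_gen` needs two.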