Author: sewardj
Date: 2008-09-06 20:34:12 +0100 (Sat, 06 Sep 2008)
New Revision: 8572
Log:
* add cost-center annotations to all allocation points in the core.
* add handling for the new flag --profile-heap
Modified:
branches/YARD/coregrind/m_commandline.c
branches/YARD/coregrind/m_coredump/coredump-elf.c
branches/YARD/coregrind/m_demangle/cp-demangle.c
branches/YARD/coregrind/m_demangle/cplus-dem.c
branches/YARD/coregrind/m_demangle/dyn-string.c
branches/YARD/coregrind/m_errormgr.c
branches/YARD/coregrind/m_execontext.c
branches/YARD/coregrind/m_hashtable.c
branches/YARD/coregrind/m_initimg/initimg-linux.c
branches/YARD/coregrind/m_libcproc.c
branches/YARD/coregrind/m_main.c
branches/YARD/coregrind/m_options.c
branches/YARD/coregrind/m_oset.c
branches/YARD/coregrind/m_redir.c
branches/YARD/coregrind/m_replacemalloc/replacemalloc_core.c
branches/YARD/coregrind/m_signals.c
branches/YARD/coregrind/m_stacks.c
branches/YARD/coregrind/m_syswrap/syswrap-generic.c
branches/YARD/coregrind/m_syswrap/syswrap-x86-linux.c
branches/YARD/coregrind/m_transtab.c
branches/YARD/coregrind/m_ume.c
branches/YARD/coregrind/m_wordfm.c
branches/YARD/coregrind/m_xarray.c
branches/YARD/coregrind/pub_core_debuginfo.h
branches/YARD/coregrind/pub_core_options.h
branches/YARD/include/pub_tool_oset.h
branches/YARD/include/pub_tool_wordfm.h
branches/YARD/include/pub_tool_xarray.h
Modified: branches/YARD/coregrind/m_commandline.c
===================================================================
--- branches/YARD/coregrind/m_commandline.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_commandline.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -67,7 +67,7 @@
if ( !fd.isError ) {
size = VG_(fsize)(fd.res);
if (size > 0) {
- f_clo = VG_(malloc)(size+1);
+ f_clo = VG_(malloc)("commandline.rdv.1", size+1);
vg_assert(f_clo);
n = VG_(read)(fd.res, f_clo, size);
if (n == -1) n = 0;
@@ -154,17 +154,20 @@
vg_assert(!already_called);
already_called = True;
- tmp_xarray = VG_(newXA)( VG_(malloc), VG_(free), sizeof(HChar*) );
+ tmp_xarray = VG_(newXA)( VG_(malloc), "commandline.sua.1",
+ VG_(free), sizeof(HChar*) );
vg_assert(tmp_xarray);
vg_assert( ! VG_(args_for_valgrind) );
VG_(args_for_valgrind)
- = VG_(newXA)( VG_(malloc), VG_(free), sizeof(HChar*) );
+ = VG_(newXA)( VG_(malloc), "commandline.sua.2",
+ VG_(free), sizeof(HChar*) );
vg_assert( VG_(args_for_valgrind) );
vg_assert( ! VG_(args_for_client) );
VG_(args_for_client)
- = VG_(newXA)( VG_(malloc), VG_(free), sizeof(HChar*) );
+ = VG_(newXA)( VG_(malloc), "commandline.sua.3",
+ VG_(free), sizeof(HChar*) );
vg_assert( VG_(args_for_client) );
/* Collect up the args-for-V. */
@@ -203,7 +206,8 @@
// put into VG_(args_for_valgrind) and so must persist.
HChar* home = VG_(getenv)("HOME");
HChar* f1_clo = home ? read_dot_valgrindrc( home ) : NULL;
- HChar* env_clo = VG_(strdup)( VG_(getenv)(VALGRIND_OPTS) );
+ HChar* env_clo = VG_(strdup)( "commandline.sua.4",
+ VG_(getenv)(VALGRIND_OPTS) );
HChar* f2_clo = NULL;
// Don't read ./.valgrindrc if "." is the same as "$HOME", else its
Modified: branches/YARD/coregrind/m_coredump/coredump-elf.c
===================================================================
--- branches/YARD/coregrind/m_coredump/coredump-elf.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_coredump/coredump-elf.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -79,7 +79,7 @@
n_starts = 1;
while (True) {
- starts = VG_(malloc)( n_starts * sizeof(Addr) );
+ starts = VG_(malloc)( "coredump-elf.gss.1", n_starts * sizeof(Addr) );
if (starts == NULL)
break;
r = VG_(am_get_segment_starts)( starts, n_starts );
@@ -184,7 +184,7 @@
Int notelen = sizeof(struct note) +
VG_ROUNDUP(namelen, 4) +
VG_ROUNDUP(datasz, 4);
- struct note *n = VG_(arena_malloc)(VG_AR_CORE, notelen);
+ struct note *n = VG_(arena_malloc)(VG_AR_CORE, "coredump-elf.an.1", notelen);
VG_(memset)(n, 0, notelen);
@@ -349,7 +349,8 @@
notelist = NULL;
/* Second, work out their layout */
- phdrs = VG_(arena_malloc)(VG_AR_CORE, sizeof(*phdrs) * num_phdrs);
+ phdrs = VG_(arena_malloc)(VG_AR_CORE, "coredump-elf.mec.1",
+ sizeof(*phdrs) * num_phdrs);
for(i = 1; i < VG_N_THREADS; i++) {
vki_elf_fpregset_t fpu;
Modified: branches/YARD/coregrind/m_demangle/cp-demangle.c
===================================================================
--- branches/YARD/coregrind/m_demangle/cp-demangle.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_demangle/cp-demangle.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -51,9 +51,9 @@
#ifndef STANDALONE
#define size_t Int
-#define malloc(s) VG_(arena_malloc) (VG_AR_DEMANGLE, s)
-#define free(p) VG_(arena_free) (VG_AR_DEMANGLE, p)
-#define realloc(p,s) VG_(arena_realloc)(VG_AR_DEMANGLE, p, s)
+#define malloc(_cc,s) VG_(arena_malloc) (VG_AR_DEMANGLE, _cc, s)
+#define free(p) VG_(arena_free) (VG_AR_DEMANGLE, p)
+#define realloc(_cc,p,s) VG_(arena_realloc)(VG_AR_DEMANGLE, _cc, p, s)
#endif
/* If CP_DEMANGLE_DEBUG is defined, a trace of the grammar evaluation,
@@ -423,7 +423,8 @@
string_list_new (length)
int length;
{
- string_list_t s = (string_list_t) malloc (sizeof (struct string_list_def));
+ string_list_t s = (string_list_t) malloc ("demangle.sln.1",
+ sizeof (struct string_list_def));
if (s == NULL)
return NULL;
s->caret_position = 0;
@@ -594,7 +595,7 @@
sizeof (struct substitution_def) * dm->substitutions_allocated;
dm->substitutions = (struct substitution_def *)
- realloc (dm->substitutions, new_array_size);
+ realloc ("demangle.sa.1", dm->substitutions, new_array_size);
if (dm->substitutions == NULL)
/* Realloc failed. */
{
@@ -672,7 +673,8 @@
template_arg_list_new ()
{
template_arg_list_t new_list =
- (template_arg_list_t) malloc (sizeof (struct template_arg_list_def));
+ (template_arg_list_t) malloc ("demangle.talt.1",
+ sizeof (struct template_arg_list_def));
if (new_list == NULL)
return NULL;
/* Initialize the new list to have no arguments. */
@@ -820,7 +822,8 @@
int style;
{
demangling_t dm;
- dm = (demangling_t) malloc (sizeof (struct demangling_def));
+ dm = (demangling_t) malloc ("demangle.dn.1",
+ sizeof (struct demangling_def));
if (dm == NULL)
return NULL;
@@ -834,7 +837,8 @@
if (dm->last_source_name == NULL)
return NULL;
dm->substitutions = (struct substitution_def *)
- malloc (dm->substitutions_allocated * sizeof (struct substitution_def));
+ malloc ("demangle.dn.2",
+ dm->substitutions_allocated * sizeof (struct substitution_def));
if (dm->substitutions == NULL)
{
dyn_string_delete (dm->last_source_name);
Modified: branches/YARD/coregrind/m_demangle/cplus-dem.c
===================================================================
--- branches/YARD/coregrind/m_demangle/cplus-dem.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_demangle/cplus-dem.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -76,10 +76,10 @@
#ifndef STANDALONE
#define size_t Int
-#define xstrdup(ptr) VG_(arena_strdup) (VG_AR_DEMANGLE, ptr)
-#define free(ptr) VG_(arena_free) (VG_AR_DEMANGLE, ptr)
-#define xmalloc(size) VG_(arena_malloc) (VG_AR_DEMANGLE, size)
-#define xrealloc(ptr, size) VG_(arena_realloc)(VG_AR_DEMANGLE, ptr, size)
+#define xstrdup(_cc,ptr) VG_(arena_strdup) (VG_AR_DEMANGLE, _cc, ptr)
+#define free(ptr) VG_(arena_free) (VG_AR_DEMANGLE, ptr)
+#define xmalloc(_cc,size) VG_(arena_malloc) (VG_AR_DEMANGLE, _cc, size)
+#define xrealloc(_cc,ptr, size) VG_(arena_realloc)(VG_AR_DEMANGLE, _cc, ptr, size)
#define abort() vg_assert(0)
#undef strstr
@@ -948,7 +948,7 @@
struct work_stuff work[1];
if (current_demangling_style == no_demangling)
- return xstrdup (mangled);
+ return xstrdup ("demangle.cd.1", mangled);
memset ((char *) work, 0, sizeof (work));
work->options = options;
@@ -995,7 +995,7 @@
*size *= 2;
if (*size < min_size)
*size = min_size;
- *old_vect = xrealloc (*old_vect, *size * element_size);
+ *old_vect = xrealloc ("demangle.gv.1", *old_vect, *size * element_size);
}
}
@@ -1219,55 +1219,60 @@
/* Deep-copy dynamic storage. */
if (from->typevec_size)
to->typevec
- = (char **) xmalloc (from->typevec_size * sizeof (to->typevec[0]));
+ = (char **) xmalloc ("demangle.wsctf.1",
+ from->typevec_size * sizeof (to->typevec[0]));
for (i = 0; i < from->ntypes; i++)
{
int len = strlen (from->typevec[i]) + 1;
- to->typevec[i] = xmalloc (len);
+ to->typevec[i] = xmalloc ("demangle.wsctf.2", len);
memcpy (to->typevec[i], from->typevec[i], len);
}
if (from->ksize)
to->ktypevec
- = (char **) xmalloc (from->ksize * sizeof (to->ktypevec[0]));
+ = (char **) xmalloc ("demangle.wsctf.3",
+ from->ksize * sizeof (to->ktypevec[0]));
for (i = 0; i < from->numk; i++)
{
int len = strlen (from->ktypevec[i]) + 1;
- to->ktypevec[i] = xmalloc (len);
+ to->ktypevec[i] = xmalloc ("demangle.wsctf.4", len);
memcpy (to->ktypevec[i], from->ktypevec[i], len);
}
if (from->bsize)
to->btypevec
- = (char **) xmalloc (from->bsize * sizeof (to->btypevec[0]));
+ = (char **) xmalloc ("demangle.wsctf.5",
+ from->bsize * sizeof (to->btypevec[0]));
for (i = 0; i < from->numb; i++)
{
int len = strlen (from->btypevec[i]) + 1;
- to->btypevec[i] = xmalloc (len);
+ to->btypevec[i] = xmalloc ("demangle.wsctf.6", len);
memcpy (to->btypevec[i], from->btypevec[i], len);
}
if (from->ntmpl_args)
to->tmpl_argvec
- = xmalloc (from->ntmpl_args * sizeof (to->tmpl_argvec[0]));
+ = xmalloc ("demangle.wsctf.7",
+ from->ntmpl_args * sizeof (to->tmpl_argvec[0]));
for (i = 0; i < from->ntmpl_args; i++)
{
int len = strlen (from->tmpl_argvec[i]) + 1;
- to->tmpl_argvec[i] = xmalloc (len);
+ to->tmpl_argvec[i] = xmalloc ("demangle.wsctf.8", len);
memcpy (to->tmpl_argvec[i], from->tmpl_argvec[i], len);
}
if (from->previous_argument)
{
- to->previous_argument = (string*) xmalloc (sizeof (string));
+ to->previous_argument = (string*) xmalloc ("demangle.wsctf.9",
+ sizeof (string));
string_init (to->previous_argument);
string_appends (to->previous_argument, from->previous_argument);
}
@@ -2018,7 +2023,7 @@
string_appendn (s, "0", 1);
else
{
- char *p = xmalloc (symbol_len + 1), *q;
+ char *p = xmalloc ("demangle.dtvp.1", symbol_len + 1), *q;
strncpy (p, *mangled, symbol_len);
p [symbol_len] = '\0';
/* We use cplus_demangle here, rather than
@@ -2133,7 +2138,8 @@
if (!is_type)
{
/* Create an array for saving the template argument values. */
- work->tmpl_argvec = (char**) xmalloc (r * sizeof (char *));
+ work->tmpl_argvec = (char**) xmalloc ("demangle.dt.1",
+ r * sizeof (char *));
work->ntmpl_args = r;
for (i = 0; i < r; i++)
work->tmpl_argvec[i] = 0;
@@ -2158,7 +2164,7 @@
{
/* Save the template argument. */
int len = temp.p - temp.b;
- work->tmpl_argvec[i] = xmalloc (len + 1);
+ work->tmpl_argvec[i] = xmalloc ("demangle.dt.2", len + 1);
memcpy (work->tmpl_argvec[i], temp.b, len);
work->tmpl_argvec[i][len] = '\0';
}
@@ -2186,7 +2192,7 @@
{
/* Save the template argument. */
int len = r2;
- work->tmpl_argvec[i] = xmalloc (len + 1);
+ work->tmpl_argvec[i] = xmalloc ("demangle.dt.3", len + 1);
memcpy (work->tmpl_argvec[i], *mangled, len);
work->tmpl_argvec[i][len] = '\0';
}
@@ -2232,7 +2238,7 @@
if (!is_type)
{
int len = s->p - s->b;
- work->tmpl_argvec[i] = xmalloc (len + 1);
+ work->tmpl_argvec[i] = xmalloc ("demangle.dt.4", len + 1);
memcpy (work->tmpl_argvec[i], s->b, len);
work->tmpl_argvec[i][len] = '\0';
@@ -3131,7 +3137,7 @@
char * recurse = (char *)NULL;
char * recurse_dem = (char *)NULL;
- recurse = (char *) xmalloc (namelength + 1);
+ recurse = (char *) xmalloc ("demangle.rd.1", namelength + 1);
memcpy (recurse, *mangled, namelength);
recurse[namelength] = '\000';
@@ -4130,7 +4136,7 @@
string_append (result, "&");
/* Now recursively demangle the literal name */
- recurse = (char *) xmalloc (literal_len + 1);
+ recurse = (char *) xmalloc ("demangle.dhtl.1", literal_len + 1);
memcpy (recurse, *mangled, literal_len);
recurse[literal_len] = '\000';
@@ -4240,7 +4246,8 @@
string_clear (work->previous_argument);
else
{
- work->previous_argument = (string*) xmalloc (sizeof (string));
+ work->previous_argument = (string*) xmalloc ("demangle.da.1",
+ sizeof (string));
string_init (work->previous_argument);
}
@@ -4275,17 +4282,18 @@
{
work -> typevec_size = 3;
work -> typevec
- = (char **) xmalloc (sizeof (char *) * work -> typevec_size);
+ = (char **) xmalloc ("demangle.rt.1",
+ sizeof (char *) * work -> typevec_size);
}
else
{
work -> typevec_size *= 2;
work -> typevec
- = (char **) xrealloc ((char *)work -> typevec,
+ = (char **) xrealloc ("demangle.rt.2", (char *)work -> typevec,
sizeof (char *) * work -> typevec_size);
}
}
- tem = xmalloc (len + 1);
+ tem = xmalloc ("demangle.rt.3", len + 1);
memcpy (tem, start, len);
tem[len] = '\0';
work -> typevec[work -> ntypes++] = tem;
@@ -4307,17 +4315,18 @@
{
work -> ksize = 5;
work -> ktypevec
- = (char **) xmalloc (sizeof (char *) * work -> ksize);
+ = (char **) xmalloc ("demangle.rK.1",
+ sizeof (char *) * work -> ksize);
}
else
{
work -> ksize *= 2;
work -> ktypevec
- = (char **) xrealloc ((char *)work -> ktypevec,
+ = (char **) xrealloc ("demangle.rK.2", (char *)work -> ktypevec,
sizeof (char *) * work -> ksize);
}
}
- tem = xmalloc (len + 1);
+ tem = xmalloc ("demangle.rK.3", len + 1);
memcpy (tem, start, len);
tem[len] = '\0';
work -> ktypevec[work -> numk++] = tem;
@@ -4339,13 +4348,14 @@
{
work -> bsize = 5;
work -> btypevec
- = (char **) xmalloc (sizeof (char *) * work -> bsize);
+ = (char **) xmalloc ("demangle.rB.1",
+ sizeof (char *) * work -> bsize);
}
else
{
work -> bsize *= 2;
work -> btypevec
- = (char **) xrealloc ((char *)work -> btypevec,
+ = (char **) xrealloc ("demangle.rB.2", (char *)work -> btypevec,
sizeof (char *) * work -> bsize);
}
}
@@ -4364,7 +4374,7 @@
{
char *tem;
- tem = xmalloc (len + 1);
+ tem = xmalloc ("demangle.remember_Btype.1", len + 1);
memcpy (tem, start, len);
tem[len] = '\0';
work -> btypevec[ind] = tem;
@@ -4815,7 +4825,7 @@
{
n = 32;
}
- s->p = s->b = xmalloc (n);
+ s->p = s->b = xmalloc ("demangle.sn.1", n);
s->e = s->b + n;
}
else if (s->e - s->p < n)
@@ -4823,7 +4833,7 @@
tem = s->p - s->b;
n += tem;
n *= 2;
- s->b = xrealloc (s->b, n);
+ s->b = xrealloc ("demangle.sn.2", s->b, n);
s->p = s->b + tem;
s->e = s->b + n;
}
Modified: branches/YARD/coregrind/m_demangle/dyn-string.c
===================================================================
--- branches/YARD/coregrind/m_demangle/dyn-string.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_demangle/dyn-string.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -39,9 +39,9 @@
#include "dyn-string.h"
#ifndef STANDALONE
-#define malloc(s) VG_(arena_malloc) (VG_AR_DEMANGLE, s)
-#define free(p) VG_(arena_free) (VG_AR_DEMANGLE, p)
-#define realloc(p,s) VG_(arena_realloc)(VG_AR_DEMANGLE, p, s)
+#define malloc(_cc,s) VG_(arena_malloc) (VG_AR_DEMANGLE, _cc, s)
+#define free(p) VG_(arena_free) (VG_AR_DEMANGLE, p)
+#define realloc(_cc,p,s) VG_(arena_realloc)(VG_AR_DEMANGLE, _cc, p, s)
#endif
/* If this file is being compiled for inclusion in the C++ runtime
@@ -77,7 +77,7 @@
if (ds_struct_ptr->s == NULL)
return 0;
#else
- ds_struct_ptr->s = (char *) malloc (space);
+ ds_struct_ptr->s = (char *) malloc ("demangle.dsi.1", space);
#endif
ds_struct_ptr->allocated = space;
ds_struct_ptr->length = 0;
@@ -98,7 +98,7 @@
{
dyn_string_t result;
#ifdef RETURN_ON_ALLOCATION_FAILURE
- result = (dyn_string_t) malloc (sizeof (struct dyn_string));
+ result = (dyn_string_t) malloc ("demangle.dsn.1", sizeof (struct dyn_string));
if (result == NULL)
return NULL;
if (!dyn_string_init (result, space))
@@ -107,7 +107,7 @@
return NULL;
}
#else
- result = (dyn_string_t) malloc (sizeof (struct dyn_string));
+ result = (dyn_string_t) malloc ("demangle.dsn.2", sizeof (struct dyn_string));
dyn_string_init (result, space);
#endif
return result;
@@ -167,14 +167,14 @@
ds->allocated = new_allocated;
/* We actually need more space. */
#ifdef RETURN_ON_ALLOCATION_FAILURE
- ds->s = (char *) realloc (ds->s, ds->allocated);
+ ds->s = (char *) realloc ("demangle.dsr.1", ds->s, ds->allocated);
if (ds->s == NULL)
{
free (ds);
return NULL;
}
#else
- ds->s = (char *) realloc (ds->s, ds->allocated);
+ ds->s = (char *) realloc ("demangle.dsr.2", ds->s, ds->allocated);
#endif
}
Modified: branches/YARD/coregrind/m_errormgr.c
===================================================================
--- branches/YARD/coregrind/m_errormgr.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_errormgr.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -600,7 +600,7 @@
*/
/* copy main part */
- p = VG_(arena_malloc)(VG_AR_ERRORS, sizeof(Error));
+ p = VG_(arena_malloc)(VG_AR_ERRORS, "errormgr.mre.1", sizeof(Error));
*p = err;
/* update 'extra' */
@@ -618,7 +618,7 @@
/* copy block pointed to by 'extra', if there is one */
if (NULL != p->extra && 0 != extra_size) {
- void* new_extra = VG_(malloc)(extra_size);
+ void* new_extra = VG_(malloc)("errormgr.mre.2", extra_size);
VG_(memcpy)(new_extra, p->extra, extra_size);
p->extra = new_extra;
}
@@ -979,7 +979,8 @@
while (True) {
/* Assign and initialise the two suppression halves (core and tool) */
Supp* supp;
- supp = VG_(arena_malloc)(VG_AR_CORE, sizeof(Supp));
+ supp = VG_(arena_malloc)(VG_AR_CORE, "errormgr.losf.1",
+ sizeof(Supp));
supp->count = 0;
// Initialise temporary reading-in buffer.
@@ -999,7 +1000,7 @@
if (eof || VG_STREQ(buf, "}")) BOMB("unexpected '}'");
- supp->sname = VG_(arena_strdup)(VG_AR_CORE, buf);
+ supp->sname = VG_(arena_strdup)(VG_AR_CORE, "errormgr.losf.2", buf);
eof = VG_(get_line) ( fd, buf, N_BUF );
@@ -1069,7 +1070,8 @@
BOMB("too many callers in stack trace");
if (i > 0 && i >= VG_(clo_backtrace_size))
break;
- tmp_callers[i].name = VG_(arena_strdup)(VG_AR_CORE, buf);
+ tmp_callers[i].name = VG_(arena_strdup)(VG_AR_CORE,
+ "errormgr.losf.3", buf);
if (!setLocationTy(&(tmp_callers[i])))
BOMB("location should start with 'fun:' or 'obj:'");
i++;
@@ -1085,7 +1087,8 @@
// Copy tmp_callers[] into supp->callers[]
supp->n_callers = i;
- supp->callers = VG_(arena_malloc)(VG_AR_CORE, i*sizeof(SuppLoc));
+ supp->callers = VG_(arena_malloc)(VG_AR_CORE, "errormgr.losf.4",
+ i*sizeof(SuppLoc));
for (i = 0; i < supp->n_callers; i++) {
supp->callers[i] = tmp_callers[i];
}
Modified: branches/YARD/coregrind/m_execontext.c
===================================================================
--- branches/YARD/coregrind/m_execontext.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_execontext.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -135,7 +135,7 @@
ec_htab_size_idx = 0;
ec_htab_size = ec_primes[ec_htab_size_idx];
- ec_htab = VG_(arena_malloc)(VG_AR_EXECTXT,
+ ec_htab = VG_(arena_malloc)(VG_AR_EXECTXT, "execontext.iEs1",
sizeof(ExeContext*) * ec_htab_size);
for (i = 0; i < ec_htab_size; i++)
ec_htab[i] = NULL;
@@ -260,7 +260,7 @@
return; /* out of primes - can't resize further */
new_size = ec_primes[ec_htab_size_idx + 1];
- new_ec_htab = VG_(arena_malloc)(VG_AR_EXECTXT,
+ new_ec_htab = VG_(arena_malloc)(VG_AR_EXECTXT, "execontext.reh1",
sizeof(ExeContext*) * new_size);
VG_(debugLog)(
@@ -395,7 +395,7 @@
/* Bummer. We have to allocate a new context record. */
ec_totstored++;
- new_ec = VG_(arena_malloc)( VG_AR_EXECTXT,
+ new_ec = VG_(arena_malloc)( VG_AR_EXECTXT, "execontext.rEw2.2",
sizeof(struct _ExeContext)
+ n_ips * sizeof(Addr) );
Modified: branches/YARD/coregrind/m_hashtable.c
===================================================================
--- branches/YARD/coregrind/m_hashtable.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_hashtable.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -69,8 +69,9 @@
/* Initialises to zero, ie. all entries NULL */
SizeT n_chains = primes[0];
SizeT sz = n_chains * sizeof(VgHashNode*);
- VgHashTable table = VG_(calloc)(1, sizeof(struct _VgHashTable));
- table->chains = VG_(calloc)(1, sz);
+ VgHashTable table = VG_(calloc)("hashtable.Hc.1",
+ 1, sizeof(struct _VgHashTable));
+ table->chains = VG_(calloc)("hashtable.Hc.2", 1, sz);
table->n_chains = n_chains;
table->n_elements = 0;
table->iterOK = True;
@@ -119,7 +120,7 @@
table->n_chains = new_chains;
sz = new_chains * sizeof(VgHashNode*);
- chains = VG_(calloc)(1, sz);
+ chains = VG_(calloc)("hashtable.resize.1", 1, sz);
for (i = 0; i < old_chains; i++) {
node = table->chains[i];
@@ -209,7 +210,7 @@
if (*n_elems == 0)
return NULL;
- arr = VG_(malloc)( *n_elems * sizeof(VgHashNode*) );
+ arr = VG_(malloc)( "hashtable.Hta.1", *n_elems * sizeof(VgHashNode*) );
j = 0;
for (i = 0; i < table->n_chains; i++) {
Modified: branches/YARD/coregrind/m_initimg/initimg-linux.c
===================================================================
--- branches/YARD/coregrind/m_initimg/initimg-linux.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_initimg/initimg-linux.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -241,12 +241,13 @@
Int preload_tool_path_len = vglib_len + VG_(strlen)(toolname)
+ sizeof(VG_PLATFORM) + 16;
Int preload_string_len = preload_core_path_len + preload_tool_path_len;
- HChar* preload_string = VG_(malloc)(preload_string_len);
+ HChar* preload_string = VG_(malloc)("initimg-linux.sce.1",
+ preload_string_len);
vg_assert(preload_string);
/* Determine if there's a vgpreload_<tool>.so file, and setup
preload_string. */
- preload_tool_path = VG_(malloc)(preload_tool_path_len);
+ preload_tool_path = VG_(malloc)("initimg-linux.sce.2", preload_tool_path_len);
vg_assert(preload_tool_path);
VG_(snprintf)(preload_tool_path, preload_tool_path_len,
"%s/%s/vgpreload_%s.so", VG_(libdir), VG_PLATFORM, toolname);
@@ -268,7 +269,8 @@
envc++;
/* Allocate a new space */
- ret = VG_(malloc) (sizeof(HChar *) * (envc+1+1)); /* 1 new entry + NULL */
+ ret = VG_(malloc) ("initimg-linux.sce.3",
+ sizeof(HChar *) * (envc+1+1)); /* 1 new entry + NULL */
vg_assert(ret);
/* copy it over */
@@ -282,7 +284,7 @@
for (cpp = ret; cpp && *cpp; cpp++) {
if (VG_(memcmp)(*cpp, ld_preload, ld_preload_len) == 0) {
Int len = VG_(strlen)(*cpp) + preload_string_len;
- HChar *cp = VG_(malloc)(len);
+ HChar *cp = VG_(malloc)("initimg-linux.sce.4", len);
vg_assert(cp);
VG_(snprintf)(cp, len, "%s%s:%s",
@@ -297,7 +299,7 @@
/* Add the missing bits */
if (!ld_preload_done) {
Int len = ld_preload_len + preload_string_len;
- HChar *cp = VG_(malloc) (len);
+ HChar *cp = VG_(malloc) ("initimg-linux.sce.5", len);
vg_assert(cp);
VG_(snprintf)(cp, len, "%s%s", ld_preload, preload_string);
Modified: branches/YARD/coregrind/m_libcproc.c
===================================================================
--- branches/YARD/coregrind/m_libcproc.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_libcproc.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -89,7 +89,8 @@
Char **env = (*envp);
Char **cpp;
Int len = VG_(strlen)(varname);
- Char *valstr = VG_(arena_malloc)(VG_AR_CORE, len + VG_(strlen)(val) + 2);
+ Char *valstr = VG_(arena_malloc)(VG_AR_CORE, "libcproc.es.1",
+ len + VG_(strlen)(val) + 2);
Char **oldenv = NULL;
VG_(sprintf)(valstr, "%s=%s", varname, val);
@@ -102,7 +103,7 @@
}
if (env == NULL) {
- env = VG_(arena_malloc)(VG_AR_CORE, sizeof(Char **) * 2);
+ env = VG_(arena_malloc)(VG_AR_CORE, "libcproc.es.2", sizeof(Char **) * 2);
env[0] = valstr;
env[1] = NULL;
@@ -110,7 +111,8 @@
} else {
Int envlen = (cpp-env) + 2;
- Char **newenv = VG_(arena_malloc)(VG_AR_CORE, envlen * sizeof(Char **));
+ Char **newenv = VG_(arena_malloc)(VG_AR_CORE, "libcproc.es.3",
+ envlen * sizeof(Char **));
for (cpp = newenv; *env; )
*cpp++ = *env++;
@@ -203,7 +205,8 @@
ld_library_path_str = &envp[i][16];
}
- buf = VG_(arena_malloc)(VG_AR_CORE, VG_(strlen)(VG_(libdir)) + 20);
+ buf = VG_(arena_malloc)(VG_AR_CORE, "libcproc.erves.1",
+ VG_(strlen)(VG_(libdir)) + 20);
// Remove Valgrind-specific entries from LD_*.
VG_(sprintf)(buf, "%s*/vgpreload_*.so", VG_(libdir));
@@ -253,7 +256,8 @@
envlen = oldenvp - oldenv + 1;
- newenv = VG_(arena_malloc)(VG_AR_CORE, envlen * sizeof(Char **));
+ newenv = VG_(arena_malloc)(VG_AR_CORE, "libcproc.ec.1",
+ envlen * sizeof(Char **));
oldenvp = oldenv;
newenvp = newenv;
Modified: branches/YARD/coregrind/m_main.c
===================================================================
--- branches/YARD/coregrind/m_main.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_main.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -170,6 +170,7 @@
" --debug-dump=frames mimic /usr/bin/readelf --debug-dump=frames\n"
" --trace-redir=no|yes show redirection details? [no]\n"
" --trace-sched=no|yes show thread scheduler details? [no]\n"
+" --profile-heap=no|yes profile Valgrind's own space use\n"
" --wait-for-gdb=yes|no pause on startup to wait for gdb attach\n"
" --sym-offsets=yes|no show syms in form 'name+offset' ? [no]\n"
" --read-var-info=yes|no read variable type & location info? [no]\n"
@@ -364,7 +365,7 @@
// wouldn't disappear on them.)
if (0)
VG_(printf)("tool-specific arg: %s\n", arg);
- arg = VG_(strdup)(arg + toolname_len + 1);
+ arg = VG_(strdup)("main.mpclo.1", arg + toolname_len + 1);
arg[0] = '-';
arg[1] = '-';
@@ -419,7 +420,7 @@
else VG_BOOL_CLO(arg, "--trace-redir", VG_(clo_trace_redir))
else VG_BOOL_CLO(arg, "--trace-syscalls", VG_(clo_trace_syscalls))
- else VG_BOOL_CLO(arg, "--trace-pthreads", VG_(clo_trace_pthreads))
+ else VG_BOOL_CLO(arg, "--profile-heap", VG_(clo_profile_heap))
else VG_BOOL_CLO(arg, "--wait-for-gdb", VG_(clo_wait_for_gdb))
else VG_STR_CLO (arg, "--db-command", VG_(clo_db_command))
else VG_STR_CLO (arg, "--sim-hints", VG_(clo_sim_hints))
@@ -595,6 +596,8 @@
VG_(clo_track_fds) = False;
/* Disable timestamped output */
VG_(clo_time_stamp) = False;
+ /* Disable heap profiling, since that prints lots of stuff. */
+ VG_(clo_profile_heap) = False;
/* Also, we want to set options for the leak checker, but that
will have to be done in Memcheck's flag-handling code, not
here. */
@@ -710,7 +713,7 @@
the default one. */
static const Char default_supp[] = "default.supp";
Int len = VG_(strlen)(VG_(libdir)) + 1 + sizeof(default_supp);
- Char *buf = VG_(arena_malloc)(VG_AR_CORE, len);
+ Char *buf = VG_(arena_malloc)(VG_AR_CORE, "main.mpclo.2", len);
VG_(sprintf)(buf, "%s/%s", VG_(libdir), default_supp);
VG_(clo_suppressions)[VG_(clo_n_suppressions)] = buf;
VG_(clo_n_suppressions)++;
@@ -1151,7 +1154,7 @@
n_starts = 1;
while (True) {
- starts = VG_(malloc)( n_starts * sizeof(Addr) );
+ starts = VG_(malloc)( "main.gss.1", n_starts * sizeof(Addr) );
if (starts == NULL)
break;
r = VG_(am_get_segment_starts)( starts, n_starts );
@@ -1330,7 +1333,7 @@
// free pair right now to check that nothing is broken.
//--------------------------------------------------------------
VG_(debugLog)(1, "main", "Starting the dynamic memory manager\n");
- { void* p = VG_(malloc)( 12345 );
+ { void* p = VG_(malloc)( "main.vm.1", 12345 );
if (p) VG_(free)( p );
}
VG_(debugLog)(1, "main", "Dynamic memory manager is running\n");
@@ -2076,6 +2079,14 @@
if (VG_(clo_verbosity) > 1)
print_all_stats();
+ /* Show a profile of the heap(s) at shutdown. Optionally, first
+ throw away all the debug info, as that makes it easy to spot
+ leaks in the debuginfo reader. */
+ if (VG_(clo_profile_heap)) {
+ if (0) VG_(di_discard_ALL_debuginfo)();
+ VG_(print_arena_cc_analysis)();
+ }
+
if (VG_(clo_profile_flags) > 0) {
#define N_MAX 200
BBProfEntry tops[N_MAX];
Modified: branches/YARD/coregrind/m_options.c
===================================================================
--- branches/YARD/coregrind/m_options.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_options.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -75,7 +75,7 @@
Bool VG_(clo_debug_dump_frames) = False;
Bool VG_(clo_trace_redir) = False;
Bool VG_(clo_trace_sched) = False;
-Bool VG_(clo_trace_pthreads) = False;
+Bool VG_(clo_profile_heap) = False;
Int VG_(clo_dump_error) = 0;
Int VG_(clo_backtrace_size) = 12;
Char* VG_(clo_sim_hints) = NULL;
@@ -153,7 +153,7 @@
// The 10 is slop, it should be enough in most cases.
len = j + VG_(strlen)(format) + 10;
- out = VG_(malloc)( len );
+ out = VG_(malloc)( "options.efn.1", len );
if (format[0] != '/') {
VG_(strcpy)(out, base_dir);
out[j++] = '/';
@@ -162,7 +162,7 @@
#define ENSURE_THIS_MUCH_SPACE(x) \
if (j + x >= len) { \
len += (10 + x); \
- out = VG_(realloc)(out, len); \
+ out = VG_(realloc)("options.efn.2(multiple)", out, len); \
}
while (format[i]) {
@@ -240,7 +240,8 @@
bad: {
Char* opt = // 2: 1 for the '=', 1 for the NUL.
- VG_(malloc)( VG_(strlen)(option_name) + VG_(strlen)(format) + 2 );
+ VG_(malloc)( "options.efn.3",
+ VG_(strlen)(option_name) + VG_(strlen)(format) + 2 );
VG_(strcpy)(opt, option_name);
VG_(strcat)(opt, "=");
VG_(strcat)(opt, format);
Modified: branches/YARD/coregrind/m_oset.c
===================================================================
--- branches/YARD/coregrind/m_oset.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_oset.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -112,6 +112,7 @@
SizeT keyOff; // key offset
OSetCmp_t cmp; // compare a key and an element, or NULL
OSetAlloc_t alloc; // allocator
+ HChar* cc; // cc for allocator
OSetFree_t free; // deallocator
Word nElems; // number of elements in the tree
AvlNode* root; // root node
@@ -282,7 +283,8 @@
// The underscores avoid GCC complaints about overshadowing global names.
AvlTree* VG_(OSetGen_Create)(OffT _keyOff, OSetCmp_t _cmp,
- OSetAlloc_t _alloc, OSetFree_t _free)
+ OSetAlloc_t _alloc, HChar* _cc,
+ OSetFree_t _free)
{
AvlTree* t;
@@ -294,10 +296,11 @@
vg_assert(_free);
if (!_cmp) vg_assert(0 == _keyOff); // If no cmp, offset must be zero
- t = _alloc(sizeof(AvlTree));
+ t = _alloc(_cc, sizeof(AvlTree));
t->keyOff = _keyOff;
t->cmp = _cmp;
t->alloc = _alloc;
+ t->cc = _cc;
t->free = _free;
t->nElems = 0;
t->root = NULL;
@@ -306,9 +309,10 @@
return t;
}
-AvlTree* VG_(OSetWord_Create)(OSetAlloc_t _alloc, OSetFree_t _free)
+AvlTree* VG_(OSetWord_Create)(OSetAlloc_t _alloc, HChar* _cc,
+ OSetFree_t _free)
{
- return VG_(OSetGen_Create)(/*keyOff*/0, /*cmp*/NULL, _alloc, _free);
+ return VG_(OSetGen_Create)(/*keyOff*/0, /*cmp*/NULL, _alloc, _cc, _free);
}
// Destructor, frees up all memory held by remaining nodes.
@@ -356,7 +360,7 @@
void* VG_(OSetGen_AllocNode)(AvlTree* t, SizeT elemSize)
{
Int nodeSize = sizeof(AvlNode) + elemSize;
- AvlNode* n = t->alloc( nodeSize );
+ AvlNode* n = t->alloc( t->cc, nodeSize );
vg_assert(elemSize > 0);
VG_(memset)(n, 0, nodeSize);
n->magic = OSET_MAGIC;
Modified: branches/YARD/coregrind/m_redir.c
===================================================================
--- branches/YARD/coregrind/m_redir.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_redir.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -280,9 +280,9 @@
static void maybe_add_active ( Active /*by value; callee copies*/ );
-static void* dinfo_zalloc(SizeT);
+static void* dinfo_zalloc(HChar* ec, SizeT);
static void dinfo_free(void*);
-static HChar* dinfo_strdup(HChar*);
+static HChar* dinfo_strdup(HChar* ec, HChar*);
static Bool is_plausible_guest_addr(Addr);
static Bool is_aix5_glink_idiom(Addr);
@@ -369,10 +369,10 @@
the following loop, and complain at that point. */
continue;
}
- spec = dinfo_zalloc(sizeof(Spec));
+ spec = dinfo_zalloc("redir.rnnD.1", sizeof(Spec));
vg_assert(spec);
- spec->from_sopatt = dinfo_strdup(demangled_sopatt);
- spec->from_fnpatt = dinfo_strdup(demangled_fnpatt);
+ spec->from_sopatt = dinfo_strdup("redir.rnnD.2", demangled_sopatt);
+ spec->from_fnpatt = dinfo_strdup("redir.rnnD.3", demangled_fnpatt);
vg_assert(spec->from_sopatt);
vg_assert(spec->from_fnpatt);
spec->to_addr = sym_addr;
@@ -418,7 +418,7 @@
/* Ok. Now specList holds the list of specs from the DebugInfo.
Build a new TopSpec, but don't add it to topSpecs yet. */
- newts = dinfo_zalloc(sizeof(TopSpec));
+ newts = dinfo_zalloc("redir.rnnD.4", sizeof(TopSpec));
vg_assert(newts);
newts->next = NULL; /* not significant */
newts->seginfo = newsi;
@@ -691,7 +691,7 @@
/* Traverse the actives, copying the addresses of those we intend
to delete into tmpSet. */
- tmpSet = VG_(OSetWord_Create)(dinfo_zalloc, dinfo_free);
+ tmpSet = VG_(OSetWord_Create)(dinfo_zalloc, "redir.rndD.1", dinfo_free);
ts->mark = True;
@@ -809,11 +809,11 @@
Addr to_addr,
const HChar* const mandatory )
{
- Spec* spec = dinfo_zalloc(sizeof(Spec));
+ Spec* spec = dinfo_zalloc("redir.ahs.1", sizeof(Spec));
vg_assert(spec);
if (topSpecs == NULL) {
- topSpecs = dinfo_zalloc(sizeof(TopSpec));
+ topSpecs = dinfo_zalloc("redir.ahs.2", sizeof(TopSpec));
vg_assert(topSpecs);
/* symtab_zalloc sets all fields to zero */
}
@@ -851,6 +851,7 @@
activeSet = VG_(OSetGen_Create)(offsetof(Active, from_addr),
NULL, // Use fast comparison
dinfo_zalloc,
+ "redir.ri.1",
dinfo_free);
// The rest of this function just adds initial Specs.
@@ -970,10 +971,10 @@
/*--- MISC HELPERS ---*/
/*------------------------------------------------------------*/
-static void* dinfo_zalloc(SizeT n) {
+static void* dinfo_zalloc(HChar* ec, SizeT n) {
void* p;
vg_assert(n > 0);
- p = VG_(arena_malloc)(VG_AR_DINFO, n);
+ p = VG_(arena_malloc)(VG_AR_DINFO, ec, n);
tl_assert(p);
VG_(memset)(p, 0, n);
return p;
@@ -984,9 +985,9 @@
return VG_(arena_free)(VG_AR_DINFO, p);
}
-static HChar* dinfo_strdup(HChar* str)
+static HChar* dinfo_strdup(HChar* ec, HChar* str)
{
- return VG_(arena_strdup)(VG_AR_DINFO, str);
+ return VG_(arena_strdup)(VG_AR_DINFO, ec, str);
}
/* Really this should be merged with translations_allowable_from_seg
Modified: branches/YARD/coregrind/m_replacemalloc/replacemalloc_core.c
===================================================================
--- branches/YARD/coregrind/m_replacemalloc/replacemalloc_core.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_replacemalloc/replacemalloc_core.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -98,9 +98,11 @@
// 'align' should be valid (ie. big enough and a power of two) by now.
// VG_(arena_memalign)() will abort if it's not.
if (VG_MIN_MALLOC_SZB == align)
- return VG_(arena_malloc) ( VG_AR_CLIENT, nbytes );
+ return VG_(arena_malloc) ( VG_AR_CLIENT, "replacemalloc.cm.1",
+ nbytes );
else
- return VG_(arena_memalign) ( VG_AR_CLIENT, align, nbytes );
+ return VG_(arena_memalign) ( VG_AR_CLIENT, "replacemalloc.cm.2",
+ align, nbytes );
}
void VG_(cli_free) ( void* p )
Modified: branches/YARD/coregrind/m_signals.c
===================================================================
--- branches/YARD/coregrind/m_signals.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_signals.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -1528,7 +1528,8 @@
block_all_host_signals(&savedmask);
if (tst->sig_queue == NULL) {
- tst->sig_queue = VG_(arena_malloc)(VG_AR_CORE, sizeof(*tst->sig_queue));
+ tst->sig_queue = VG_(arena_malloc)(VG_AR_CORE, "signals.qs.1",
+ sizeof(*tst->sig_queue));
VG_(memset)(tst->sig_queue, 0, sizeof(*tst->sig_queue));
}
sq = tst->sig_queue;
Modified: branches/YARD/coregrind/m_stacks.c
===================================================================
--- branches/YARD/coregrind/m_stacks.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_stacks.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -185,7 +185,7 @@
start = t;
}
- i = (Stack *)VG_(arena_malloc)(VG_AR_CORE, sizeof(Stack));
+ i = (Stack *)VG_(arena_malloc)(VG_AR_CORE, "stacks.rs.1", sizeof(Stack));
i->start = start;
i->end = end;
i->id = next_id++;
Modified: branches/YARD/coregrind/m_syswrap/syswrap-generic.c
===================================================================
--- branches/YARD/coregrind/m_syswrap/syswrap-generic.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_syswrap/syswrap-generic.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -539,7 +539,7 @@
/* Not already one: allocate an OpenFd */
if (i == NULL) {
- i = VG_(arena_malloc)(VG_AR_CORE, sizeof(OpenFd));
+ i = VG_(arena_malloc)(VG_AR_CORE, "syswrap.rfdowgn.1", sizeof(OpenFd));
i->prev = NULL;
i->next = allocated_fds;
@@ -549,7 +549,7 @@
}
i->fd = fd;
- i->pathname = VG_(arena_strdup)(VG_AR_CORE, pathname);
+ i->pathname = VG_(arena_strdup)(VG_AR_CORE, "syswrap.rfdowgn.2", pathname);
i->where = (tid == -1) ? NULL : VG_(record_ExeContext)(tid, 0/*first_ip_delta*/);
}
@@ -752,10 +752,10 @@
}
static
-Char *strdupcat ( const Char *s1, const Char *s2, ArenaId aid )
+Char *strdupcat ( HChar* cc, const Char *s1, const Char *s2, ArenaId aid )
{
UInt len = VG_(strlen) ( s1 ) + VG_(strlen) ( s2 ) + 1;
- Char *result = VG_(arena_malloc) ( aid, len );
+ Char *result = VG_(arena_malloc) ( aid, cc, len );
VG_(strcpy) ( result, s1 );
VG_(strcat) ( result, s2 );
return result;
@@ -765,7 +765,8 @@
void pre_mem_read_sendmsg ( ThreadId tid, Bool read,
Char *msg, Addr base, SizeT size )
{
- Char *outmsg = strdupcat ( "socketcall.sendmsg", msg, VG_AR_CORE );
+ Char *outmsg = strdupcat ( "di.syswrap.pmrs.1",
+ "socketcall.sendmsg", msg, VG_AR_CORE );
PRE_MEM_READ( outmsg, base, size );
VG_(arena_free) ( VG_AR_CORE, outmsg );
}
@@ -774,7 +775,8 @@
void pre_mem_write_recvmsg ( ThreadId tid, Bool read,
Char *msg, Addr base, SizeT size )
{
- Char *outmsg = strdupcat ( "socketcall.recvmsg", msg, VG_AR_CORE );
+ Char *outmsg = strdupcat ( "di.syswrap.pmwr.1",
+ "socketcall.recvmsg", msg, VG_AR_CORE );
if ( read )
PRE_MEM_READ( outmsg, base, size );
else
@@ -866,7 +868,7 @@
/* NULL/zero-length sockaddrs are legal */
if ( sa == NULL || salen == 0 ) return;
- outmsg = VG_(arena_malloc) ( VG_AR_CORE,
+ outmsg = VG_(arena_malloc) ( VG_AR_CORE, "di.syswrap.pmr_sockaddr.1",
VG_(strlen)( description ) + 30 );
VG_(sprintf) ( outmsg, description, ".sa_family" );
@@ -2553,7 +2555,8 @@
tot_args++;
}
// allocate
- argv = VG_(malloc)( (tot_args+1) * sizeof(HChar*) );
+ argv = VG_(malloc)( "di.syswrap.pre_sys_execve.1",
+ (tot_args+1) * sizeof(HChar*) );
if (argv == 0) goto hosed;
// copy
j = 0;
Modified: branches/YARD/coregrind/m_syswrap/syswrap-x86-linux.c
===================================================================
--- branches/YARD/coregrind/m_syswrap/syswrap-x86-linux.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_syswrap/syswrap-x86-linux.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -444,14 +444,14 @@
static VexGuestX86SegDescr* alloc_zeroed_x86_GDT ( void )
{
Int nbytes = VEX_GUEST_X86_GDT_NENT * sizeof(VexGuestX86SegDescr);
- return VG_(arena_calloc)(VG_AR_CORE, nbytes, 1);
+ return VG_(arena_calloc)(VG_AR_CORE, "di.syswrap-x86.azxG.1", nbytes, 1);
}
/* Create a zeroed-out LDT. */
static VexGuestX86SegDescr* alloc_zeroed_x86_LDT ( void )
{
Int nbytes = VEX_GUEST_X86_LDT_NENT * sizeof(VexGuestX86SegDescr);
- return VG_(arena_calloc)(VG_AR_CORE, nbytes, 1);
+ return VG_(arena_calloc)(VG_AR_CORE, "di.syswrap-x86.azxL.1", nbytes, 1);
}
/* Free up an LDT or GDT allocated by the above fns. */
Modified: branches/YARD/coregrind/m_transtab.c
===================================================================
--- branches/YARD/coregrind/m_transtab.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_transtab.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -384,7 +384,8 @@
old_sz = sec->ec2tte_size[ec];
old_ar = sec->ec2tte[ec];
new_sz = old_sz==0 ? 8 : old_sz<64 ? 2*old_sz : (3*old_sz)/2;
- new_ar = VG_(arena_malloc)(VG_AR_TTAUX, new_sz * sizeof(UShort));
+ new_ar = VG_(arena_malloc)(VG_AR_TTAUX, "transtab.aECN.1",
+ new_sz * sizeof(UShort));
for (i = 0; i < old_sz; i++)
new_ar[i] = old_ar[i];
if (old_ar)
Modified: branches/YARD/coregrind/m_ume.c
===================================================================
--- branches/YARD/coregrind/m_ume.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_ume.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -120,7 +120,7 @@
struct elfinfo *readelf(Int fd, const char *filename)
{
SysRes sres;
- struct elfinfo *e = VG_(malloc)(sizeof(*e));
+ struct elfinfo *e = VG_(malloc)("ume.re.1", sizeof(*e));
Int phsz;
vg_assert(e);
@@ -163,7 +163,7 @@
}
phsz = sizeof(ESZ(Phdr)) * e->e.e_phnum;
- e->p = VG_(malloc)(phsz);
+ e->p = VG_(malloc)("ume.re.2", phsz);
vg_assert(e->p);
sres = VG_(pread)(fd, e->p, phsz, e->e.e_phoff);
@@ -378,7 +378,7 @@
break;
case PT_INTERP: {
- char *buf = VG_(malloc)(ph->p_filesz+1);
+ HChar *buf = VG_(malloc)("ume.LE.1", ph->p_filesz+1);
Int j;
Int intfd;
Int baseaddr_set;
@@ -613,10 +613,10 @@
*cp = '\0';
}
- info->interp_name = VG_(strdup)(interp);
+ info->interp_name = VG_(strdup)("ume.ls.1", interp);
vg_assert(NULL != info->interp_name);
if (arg != NULL && *arg != '\0') {
- info->interp_args = VG_(strdup)(arg);
+ info->interp_args = VG_(strdup)("ume.ls.2", arg);
vg_assert(NULL != info->interp_args);
}
@@ -788,7 +788,7 @@
// Looks like a script. Run it with /bin/sh. This includes
// zero-length files.
- info->interp_name = VG_(strdup)(default_interp_name);
+ info->interp_name = VG_(strdup)("ume.desf.1", default_interp_name);
info->interp_args = NULL;
if (info->argv && info->argv[0] != NULL)
info->argv[0] = (char *)exe_name;
Modified: branches/YARD/coregrind/m_wordfm.c
===================================================================
--- branches/YARD/coregrind/m_wordfm.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_wordfm.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -81,7 +81,8 @@
struct _WordFM {
AvlNode* root;
- void* (*alloc_nofail)( SizeT );
+ void* (*alloc_nofail)( HChar*, SizeT );
+ HChar* cc;
void (*dealloc)(void*);
Word (*kCmp)(UWord,UWord);
AvlNode* nodeStack[WFM_STKMAX]; // Iterator node stack
@@ -459,12 +460,13 @@
AvlNode* avl_dopy ( AvlNode* nd,
UWord(*dopyK)(UWord),
UWord(*dopyV)(UWord),
- void*(alloc_nofail)(SizeT) )
+ void*(alloc_nofail)(HChar*,SizeT),
+ HChar* cc )
{
AvlNode* nyu;
if (! nd)
return NULL;
- nyu = alloc_nofail(sizeof(AvlNode));
+ nyu = alloc_nofail(cc, sizeof(AvlNode));
tl_assert(nyu);
nyu->child[0] = nd->child[0];
@@ -493,12 +495,14 @@
/* Copy subtrees */
if (nyu->child[0]) {
- nyu->child[0] = avl_dopy( nyu->child[0], dopyK, dopyV, alloc_nofail );
+ nyu->child[0] = avl_dopy( nyu->child[0], dopyK, dopyV,
+ alloc_nofail, cc );
if (! nyu->child[0])
return NULL;
}
if (nyu->child[1]) {
- nyu->child[1] = avl_dopy( nyu->child[1], dopyK, dopyV, alloc_nofail );
+ nyu->child[1] = avl_dopy( nyu->child[1], dopyK, dopyV,
+ alloc_nofail, cc );
if (! nyu->child[1])
return NULL;
}
@@ -508,13 +512,15 @@
/* Initialise a WordFM. */
static void initFM ( WordFM* fm,
- void* (*alloc_nofail)( SizeT ),
+ void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word (*kCmp)(UWord,UWord) )
{
fm->root = 0;
fm->kCmp = kCmp;
fm->alloc_nofail = alloc_nofail;
+ fm->cc = cc;
fm->dealloc = dealloc;
fm->stackTop = 0;
}
@@ -528,13 +534,14 @@
sections of the map, or the whole thing. If kCmp is NULL then the
ordering used is unsigned word ordering (UWord) on the key
values. */
-WordFM* VG_(newFM) ( void* (*alloc_nofail)( SizeT ),
+WordFM* VG_(newFM) ( void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word (*kCmp)(UWord,UWord) )
{
- WordFM* fm = alloc_nofail(sizeof(WordFM));
+ WordFM* fm = alloc_nofail(cc, sizeof(WordFM));
tl_assert(fm);
- initFM(fm, alloc_nofail, dealloc, kCmp);
+ initFM(fm, alloc_nofail, cc, dealloc, kCmp);
return fm;
}
@@ -572,7 +579,7 @@
{
MaybeWord oldV;
AvlNode* node;
- node = fm->alloc_nofail( sizeof(struct _AvlNode) );
+ node = fm->alloc_nofail( fm->cc, sizeof(struct _AvlNode) );
node->key = k;
node->val = v;
oldV.b = False;
@@ -735,7 +742,7 @@
/* can't clone the fm whilst iterating on it */
tl_assert(fm->stackTop == 0);
- nyu = fm->alloc_nofail( sizeof(WordFM) );
+ nyu = fm->alloc_nofail( fm->cc, sizeof(WordFM) );
tl_assert(nyu);
*nyu = *fm;
@@ -745,7 +752,8 @@
VG_(memset)(fm->numStack, 0, sizeof(fm->numStack));
if (nyu->root) {
- nyu->root = avl_dopy( nyu->root, dopyK, dopyV, fm->alloc_nofail );
+ nyu->root = avl_dopy( nyu->root, dopyK, dopyV,
+ fm->alloc_nofail, fm->cc );
if (! nyu->root)
return NULL;
}
@@ -768,11 +776,12 @@
WordFM* fm;
};
-WordBag* VG_(newBag) ( void* (*alloc_nofail)( SizeT ),
+WordBag* VG_(newBag) ( void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*) )
{
- WordBag* bag = alloc_nofail(sizeof(WordBag));
- bag->fm = VG_(newFM)( alloc_nofail, dealloc, NULL );
+ WordBag* bag = alloc_nofail(cc, sizeof(WordBag));
+ bag->fm = VG_(newFM)( alloc_nofail, cc, dealloc, NULL );
return bag;
}
Property changes on: branches/YARD/coregrind/m_wordfm.c
___________________________________________________________________
Name: svn:mergeinfo
-
Modified: branches/YARD/coregrind/m_xarray.c
===================================================================
--- branches/YARD/coregrind/m_xarray.c 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/m_xarray.c 2008-09-06 19:34:12 UTC (rev 8572)
@@ -38,7 +38,8 @@
/* See pub_tool_xarray.h for details of what this is all about. */
struct _XArray {
- void* (*alloc) ( SizeT ); /* alloc fn (nofail) */
+ void* (*alloc) ( HChar*, SizeT ); /* alloc fn (nofail) */
+ HChar* cc; /* cost centre for alloc */
void (*free) ( void* ); /* free fn */
Int (*cmpFn) ( void*, void* ); /* cmp fn (may be NULL) */
Word elemSzB; /* element size in bytes */
@@ -49,7 +50,8 @@
};
-XArray* VG_(newXA) ( void*(*alloc_fn)(SizeT),
+XArray* VG_(newXA) ( void*(*alloc_fn)(HChar*,SizeT),
+ HChar* cc,
void(*free_fn)(void*),
Word elemSzB )
{
@@ -63,9 +65,10 @@
vg_assert(alloc_fn);
vg_assert(free_fn);
vg_assert(elemSzB > 0);
- xa = alloc_fn( sizeof(struct _XArray) );
+ xa = alloc_fn( cc, sizeof(struct _XArray) );
vg_assert(xa);
xa->alloc = alloc_fn;
+ xa->cc = cc;
xa->free = free_fn;
xa->cmpFn = NULL;
xa->elemSzB = elemSzB;
@@ -76,22 +79,32 @@
return xa;
}
-XArray* VG_(cloneXA)( XArray* xao )
+XArray* VG_(cloneXA)( HChar* cc, XArray* xao )
{
struct _XArray* xa = (struct _XArray*)xao;
struct _XArray* nyu;
+ HChar* nyu_cc;
vg_assert(xa);
vg_assert(xa->alloc);
vg_assert(xa->free);
vg_assert(xa->elemSzB >= 1);
- nyu = xa->alloc( sizeof(struct _XArray) );
+ nyu_cc = cc ? cc : xa->cc;
+ nyu = xa->alloc( nyu_cc, sizeof(struct _XArray) );
if (!nyu)
return NULL;
/* Copy everything verbatim ... */
*nyu = *xa;
+ nyu->cc = nyu_cc;
/* ... except we have to clone the contents-array */
if (nyu->arr) {
- nyu->arr = nyu->alloc( nyu->totsizeE * nyu->elemSzB );
+ /* Restrict the total size of the new array to its current
+ actual size. That means we don't waste space copying the
+ unused tail of the original. The tradeoff is that it
+ guarantees we will have to resize the child if even one more
+ element is later added to it, unfortunately. */
+ nyu->totsizeE = nyu->usedsizeE;
+ /* and allocate .. */
+ nyu->arr = nyu->alloc( nyu->cc, nyu->totsizeE * nyu->elemSzB );
if (!nyu->arr) {
nyu->free(nyu);
return NULL;
@@ -151,10 +164,10 @@
} else {
newsz = 2 * xa->totsizeE;
}
- if (0)
+ if (0 && xa->totsizeE >= 10000)
VG_(printf)("addToXA: increasing from %ld to %ld\n",
xa->totsizeE, newsz);
- tmp = xa->alloc(newsz * xa->elemSzB);
+ tmp = xa->alloc(xa->cc, newsz * xa->elemSzB);
vg_assert(tmp);
if (xa->usedsizeE > 0)
VG_(memcpy)(tmp, xa->arr, xa->usedsizeE * xa->elemSzB);
Modified: branches/YARD/coregrind/pub_core_debuginfo.h
===================================================================
--- branches/YARD/coregrind/pub_core_debuginfo.h 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/pub_core_debuginfo.h 2008-09-06 19:34:12 UTC (rev 8572)
@@ -72,6 +72,8 @@
);
#endif
+extern void VG_(di_discard_ALL_debuginfo)( void );
+
extern Bool VG_(get_fnname_nodemangle)( Addr a,
Char* fnname, Int n_fnname );
Modified: branches/YARD/coregrind/pub_core_options.h
===================================================================
--- branches/YARD/coregrind/pub_core_options.h 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/coregrind/pub_core_options.h 2008-09-06 19:34:12 UTC (rev 8572)
@@ -126,10 +126,10 @@
extern Bool VG_(clo_trace_redir);
/* DEBUG: print thread scheduling events? default: NO */
extern Bool VG_(clo_trace_sched);
-/* DEBUG: print pthreads calls? default: NO */
-extern Bool VG_(clo_trace_pthreads);
-/* Display gory details for the k'th most popular error. default:
- Infinity. */
+/* DEBUG: do heap profiling? default: NO */
+extern Bool VG_(clo_profile_heap);
+/* DEBUG: display gory details for the k'th most popular error.
+ default: Infinity. */
extern Int VG_(clo_dump_error);
/* Engage miscellaneous weird hacks needed for some progs. */
extern Char* VG_(clo_sim_hints);
Modified: branches/YARD/include/pub_tool_oset.h
===================================================================
--- branches/YARD/include/pub_tool_oset.h 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/include/pub_tool_oset.h 2008-09-06 19:34:12 UTC (rev 8572)
@@ -77,7 +77,7 @@
// - Free: frees a chunk of memory allocated with Alloc.
typedef Word (*OSetCmp_t) ( const void* key, const void* elem );
-typedef void* (*OSetAlloc_t) ( SizeT szB );
+typedef void* (*OSetAlloc_t) ( HChar* ec, SizeT szB );
typedef void (*OSetFree_t) ( void* p );
/*--------------------------------------------------------------------*/
@@ -98,7 +98,8 @@
// to allow the destruction of any attached resources; if NULL it is not
// called.
-extern OSet* VG_(OSetWord_Create) ( OSetAlloc_t alloc, OSetFree_t free );
+extern OSet* VG_(OSetWord_Create) ( OSetAlloc_t alloc, HChar* ec,
+ OSetFree_t free );
extern void VG_(OSetWord_Destroy) ( OSet* os );
/*--------------------------------------------------------------------*/
@@ -183,7 +184,8 @@
// lead to assertions in Valgrind's allocator.
extern OSet* VG_(OSetGen_Create) ( OffT keyOff, OSetCmp_t cmp,
- OSetAlloc_t alloc, OSetFree_t free );
+ OSetAlloc_t alloc, HChar* ec,
+ OSetFree_t free );
extern void VG_(OSetGen_Destroy) ( OSet* os );
extern void* VG_(OSetGen_AllocNode) ( OSet* os, SizeT elemSize );
extern void VG_(OSetGen_FreeNode) ( OSet* os, void* elem );
Modified: branches/YARD/include/pub_tool_wordfm.h
===================================================================
--- branches/YARD/include/pub_tool_wordfm.h 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/include/pub_tool_wordfm.h 2008-09-06 19:34:12 UTC (rev 8572)
@@ -76,7 +76,8 @@
sections of the map, or the whole thing. If kCmp is NULL then the
ordering used is unsigned word ordering (UWord) on the key
values. */
-WordFM* VG_(newFM) ( void* (*alloc_nofail)( SizeT ),
+WordFM* VG_(newFM) ( void* (*alloc_nofail)( HChar* cc, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word (*kCmp)(UWord,UWord) );
@@ -139,7 +140,8 @@
typedef struct _WordBag WordBag; /* opaque */
/* Allocate and initialise a WordBag */
-WordBag* VG_(newBag) ( void* (*alloc_nofail)( SizeT ),
+WordBag* VG_(newBag) ( void* (*alloc_nofail)( HChar* cc, SizeT ),
+ HChar* cc,
void (*dealloc)(void*) );
/* Free up the Bag. */
Property changes on: branches/YARD/include/pub_tool_wordfm.h
___________________________________________________________________
Name: svn:mergeinfo
-
Modified: branches/YARD/include/pub_tool_xarray.h
===================================================================
--- branches/YARD/include/pub_tool_xarray.h 2008-09-06 19:29:04 UTC (rev 8571)
+++ branches/YARD/include/pub_tool_xarray.h 2008-09-06 19:34:12 UTC (rev 8572)
@@ -44,12 +44,13 @@
/* It's an abstract type. Bwaha. */
-typedef void XArray;
+typedef struct _XArray XArray;
/* Create new XArray, using given allocation and free function, and
for elements of the specified size. Alloc fn must not fail (that
is, if it returns it must have succeeded.) */
-extern XArray* VG_(newXA) ( void*(*alloc_fn)(SizeT),
+extern XArray* VG_(newXA) ( void*(*alloc_fn)(HChar*,SizeT),
+ HChar* cc,
void(*free_fn)(void*),
Word elemSzB );
@@ -102,8 +103,10 @@
/* Make a new, completely independent copy of the given XArray, using
the existing allocation function to allocate the new space.
Returns NULL if the allocation function didn't manage to allocate
- space (but did return NULL rather than merely abort.) */
-extern XArray* VG_(cloneXA)( XArray* xa );
+ space (but did return NULL rather than merely abort.) Space for
+ the clone (and all additions to it) is billed to 'cc' unless that
+ is NULL, in which case the parent's cost-center is used. */
+extern XArray* VG_(cloneXA)( HChar* cc, XArray* xa );
#endif // __PUB_TOOL_XARRAY_H
From: <sv...@va...> - 2008-09-06 19:28:55
Author: sewardj
Date: 2008-09-06 20:29:04 +0100 (Sat, 06 Sep 2008)
New Revision: 8571
Log:
A major overhaul of the Dwarf3 variable-location and type reader (the
thing in readdwarf3.c). The changes drastically reduce the amount of
memory needed to read and store locations and types of variables, and
in particular make it actually usable for Dwarf3 generated by gcc-4.2
and later. Unfortunately the changes also make it significantly slower
(possible improvements are pending). The main changes are:
* simplify the representation of all type-related entities in Dwarf3
  (see priv_tytypes.h). A single unified type 'TyEnt' replaces the
  types TyAtom, Type, TyBounds, D3Expr, TyAdmin and probably a couple
  of others.
* get rid of the inter-type-entity resolution process. Inter-entity
  references (eg, from an array type to the type of its elements and
  to its bounds) are left as-is.
* use an iterative merging-and-substitution algorithm to get rid of
  type entities which are structurally identical. Because of the
  substitution aspect, duplication is removed even in cases where type
  entities refer to other entities. For large .so's, this often
  reduces the number of type entities that need to be stored by a
  factor of 10 or more.
* fix a huge space leak caused by gratuitous duplication of range
  lists. Range lists are now not duplicated during the processing.
There are many opportunities for reducing space usage still further;
however this change deals effectively with the worst offenders as of
gcc-4.3. Also, add cost-center annotations to all allocation points.
Modified:
branches/YARD/coregrind/m_debuginfo/debuginfo.c
branches/YARD/coregrind/m_debuginfo/misc.c
branches/YARD/coregrind/m_debuginfo/priv_d3basics.h
branches/YARD/coregrind/m_debuginfo/priv_misc.h
branches/YARD/coregrind/m_debuginfo/priv_storage.h
branches/YARD/coregrind/m_debuginfo/priv_tytypes.h
branches/YARD/coregrind/m_debuginfo/readdwarf.c
branches/YARD/coregrind/m_debuginfo/readdwarf3.c
branches/YARD/coregrind/m_debuginfo/readelf.c
branches/YARD/coregrind/m_debuginfo/readstabs.c
branches/YARD/coregrind/m_debuginfo/storage.c
branches/YARD/coregrind/m_debuginfo/tytypes.c
[... diff too large to include ...]
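The "iterative merging-and-substitution" deduplication the log describes can be sketched as a fixed-point loop: each pass maps every entity to a canonical structurally-identical representative, then substitutes that mapping into all inter-entity references, which may in turn make previously distinct entities identical. The sketch below uses a hypothetical two-field entity (the real `TyEnt` in priv_tytypes.h is much richer); names like `Ent`, `merge_pass`, and `dedup` are illustrative, not the actual readdwarf3.c API.

```c
#include <assert.h>

/* Hypothetical, simplified "type entity": a kind tag plus at most one
   reference (by index) to another entity. */
typedef struct { int kind; int ref; /* -1 if none */ } Ent;

static int ents_equal(const Ent* a, const Ent* b) {
    return a->kind == b->kind && a->ref == b->ref;
}

/* One merge pass: map each entity to the lowest-numbered structurally
   identical one, then substitute that mapping into all references.
   Returns how many entities were remapped. */
static int merge_pass(Ent* ents, int n, int* canon) {
    int i, j, remapped = 0;
    for (i = 0; i < n; i++) {
        canon[i] = i;
        for (j = 0; j < i; j++)
            if (ents_equal(&ents[i], &ents[j])) { canon[i] = j; break; }
        if (canon[i] != i) remapped++;
    }
    for (i = 0; i < n; i++)
        if (ents[i].ref >= 0) ents[i].ref = canon[ents[i].ref];
    return remapped;
}

/* Iterate to a fixed point: substitution can make previously distinct
   entities identical, so keep going until a pass finds no new
   duplicates.  Returns the number of surviving (canonical) entities. */
static int dedup(Ent* ents, int n, int* canon) {
    int prev = -1, now = 0;
    while (now != prev) { prev = now; now = merge_pass(ents, n, canon); }
    return n - now;
}
```

Note how the substitution aspect cascades: once two leaf entities merge, any entities that referred to them become structurally identical on the next pass, which is why duplication is removed even across chains of references.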
From: <sv...@va...> - 2008-09-06 19:12:33
Author: sewardj
Date: 2008-09-06 20:12:41 +0100 (Sat, 06 Sep 2008)
New Revision: 8570
Log:
Add a heap profiling facility to the dynamic memory manager:
* all allocation requests must now supply a cost-center string,
an arbitrary "const HChar*", against which the allocation is
"billed"
* each block has its cost-center string stored with it
* the heap is periodically profiled. For each arena, the
number of blocks and bytes associated with each cost-center
string is printed out.
* the heap is profiled at system shutdown, and also when the
maximum size of an Arena increases by more than 10%.
* heap profiling is only done when --profile-heap=yes is given.
* there is an additional overhead of 8 bytes per allocated block,
needed to hold a pointer to the block's cost-center string.
Overhead is 8 bytes even on 32 bit platforms, because of
alignment constraints.
These changes make it much easier to make sense of what is going on
with Valgrind's internal dynamic memory management, and to find leaks
and excessive space use.
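The per-block layout the log (and the m_mallocfree.c diff below) describes can be modelled in a few lines: an 8-byte cost-centre slot at the lowest address, with the pointer kept in the lower-addressed half on 32-bit hosts, followed by the low size word and redzone. This is a standalone sketch, not the real m_mallocfree.c code; `CC_SZB` and `min_overhead` are names invented here for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The cost-centre field is 8 bytes even on 32-bit platforms, to keep
   the payload 8-aligned. */
enum { CC_SZB = 8 };

/* Store/fetch the cost-centre string pointer in the block's first
   (lowest-addressed) bytes. */
static void set_cc(unsigned char* b, const char* cc) {
    memcpy(b, &cc, sizeof cc);        /* low-addressed half on 32-bit */
}
static const char* get_cc(const unsigned char* b) {
    const char* cc;
    memcpy(&cc, b, sizeof cc);
    return cc;
}

/* Per the log: bszB == pszB + 2*sizeof(SizeT) + 2*rz_szB + CC_SZB,
   so the minimum per-block overhead is everything but the payload:
   24 bytes with 4-byte words and redzones, 40 with 8-byte ones. */
static size_t min_overhead(size_t rz_szB) {
    return CC_SZB + 2*sizeof(size_t) + 2*rz_szB;
}
```

With 8-byte size words and 8-byte redzones this gives 8 + 16 + 16 = 40 bytes, matching the "64-bit platforms: 2*8 + 2*8 + 8 == 40 bytes" comment in the diff.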
Modified:
branches/YARD/coregrind/m_mallocfree.c
branches/YARD/coregrind/pub_core_mallocfree.h
branches/YARD/include/pub_tool_mallocfree.h
Modified: branches/YARD/coregrind/m_mallocfree.c
===================================================================
--- branches/YARD/coregrind/m_mallocfree.c 2008-09-06 18:59:27 UTC (rev 8569)
+++ branches/YARD/coregrind/m_mallocfree.c 2008-09-06 19:12:41 UTC (rev 8570)
@@ -51,6 +51,8 @@
Long VG_(free_queue_volume) = 0;
Long VG_(free_queue_length) = 0;
+static void cc_analyse_alloc_arena ( ArenaId aid ); /* fwds */
+
/*------------------------------------------------------------*/
/*--- Main types ---*/
/*------------------------------------------------------------*/
@@ -68,6 +70,7 @@
/* Layout of an in-use block:
+ cost center (sizeof(ULong) bytes)
this block total szB (sizeof(SizeT) bytes)
red zone bytes (depends on Arena.rz_szB, but >= sizeof(void*))
(payload bytes)
@@ -76,6 +79,7 @@
Layout of a block on the free list:
+ cost center (sizeof(ULong) bytes)
this block total szB (sizeof(SizeT) bytes)
freelist previous ptr (sizeof(void*) bytes)
excess red zone bytes (if Arena.rz_szB > sizeof(void*))
@@ -87,13 +91,13 @@
Total size in bytes (bszB) and payload size in bytes (pszB)
are related by:
- bszB == pszB + 2*sizeof(SizeT) + 2*a->rz_szB
+ bszB == pszB + 2*sizeof(SizeT) + 2*a->rz_szB + sizeof(ULong)
The minimum overhead per heap block for arenas used by
the core is:
- 32-bit platforms: 2*4 + 2*4 == 16 bytes
- 64-bit platforms: 2*8 + 2*8 == 32 bytes
+ 32-bit platforms: 2*4 + 2*4 + 8 == 24 bytes
+ 64-bit platforms: 2*8 + 2*8 + 8 == 40 bytes
In both cases extra overhead may be incurred when rounding the payload
size up to VG_MIN_MALLOC_SZB.
@@ -111,6 +115,13 @@
- Superblock admin section lengths (due to elastic padding)
- Block admin section (low and high) lengths (due to elastic redzones)
- Block payload lengths (due to req_pszB rounding up)
+
+ The heap-profile cost-center field is 8 bytes even on 32 bit
+ platforms. This is so as to keep the payload field 8-aligned. On
+ a 64-bit platform, this cc-field contains a pointer to a const
+ HChar*, which is the cost center name. On 32-bit platforms, the
+ pointer lives in the lower-addressed half of the field, regardless
+ of the endianness of the host.
*/
typedef
struct {
@@ -169,6 +180,7 @@
SizeT bytes_on_loan;
SizeT bytes_mmaped;
SizeT bytes_on_loan_max;
+ SizeT next_profile_at;
}
Arena;
@@ -206,7 +218,7 @@
SizeT get_bszB_as_is ( Block* b )
{
UByte* b2 = (UByte*)b;
- SizeT bszB_lo = *(SizeT*)&b2[0];
+ SizeT bszB_lo = *(SizeT*)&b2[0 + sizeof(ULong)];
SizeT bszB_hi = *(SizeT*)&b2[mk_plain_bszB(bszB_lo) - sizeof(SizeT)];
vg_assert2(bszB_lo == bszB_hi,
"Heap block lo/hi size mismatch: lo = %llu, hi = %llu.\n"
@@ -227,7 +239,7 @@
void set_bszB ( Block* b, SizeT bszB )
{
UByte* b2 = (UByte*)b;
- *(SizeT*)&b2[0] = bszB;
+ *(SizeT*)&b2[0 + sizeof(ULong)] = bszB;
*(SizeT*)&b2[mk_plain_bszB(bszB) - sizeof(SizeT)] = bszB;
}
@@ -249,7 +261,7 @@
static __inline__
SizeT overhead_szB_lo ( Arena* a )
{
- return sizeof(SizeT) + a->rz_szB;
+ return sizeof(ULong) + sizeof(SizeT) + a->rz_szB;
}
static __inline__
SizeT overhead_szB_hi ( Arena* a )
@@ -319,7 +331,7 @@
void set_prev_b ( Block* b, Block* prev_p )
{
UByte* b2 = (UByte*)b;
- *(Block**)&b2[sizeof(SizeT)] = prev_p;
+ *(Block**)&b2[sizeof(ULong) + sizeof(SizeT)] = prev_p;
}
static __inline__
void set_next_b ( Block* b, Block* next_p )
@@ -331,7 +343,7 @@
Block* get_prev_b ( Block* b )
{
UByte* b2 = (UByte*)b;
- return *(Block**)&b2[sizeof(SizeT)];
+ return *(Block**)&b2[sizeof(ULong) + sizeof(SizeT)];
}
static __inline__
Block* get_next_b ( Block* b )
@@ -342,6 +354,22 @@
//---------------------------------------------------------------------------
+// Set and get the cost-center field of a block.
+static __inline__
+void set_cc ( Block* b, HChar* cc )
+{
+ UByte* b2 = (UByte*)b;
+ *(HChar**)&b2[0] = cc;
+}
+static __inline__
+HChar* get_cc ( Block* b )
+{
+ UByte* b2 = (UByte*)b;
+ return *(HChar**)&b2[0];
+}
+
+//---------------------------------------------------------------------------
+
// Get the block immediately preceding this one in the Superblock.
static __inline__
Block* get_predecessor_block ( Block* b )
@@ -358,7 +386,7 @@
void set_rz_lo_byte ( Arena* a, Block* b, UInt rz_byteno, UByte v )
{
UByte* b2 = (UByte*)b;
- b2[sizeof(SizeT) + rz_byteno] = v;
+ b2[sizeof(ULong) + sizeof(SizeT) + rz_byteno] = v;
}
static __inline__
void set_rz_hi_byte ( Arena* a, Block* b, UInt rz_byteno, UByte v )
@@ -370,7 +398,7 @@
UByte get_rz_lo_byte ( Arena* a, Block* b, UInt rz_byteno )
{
UByte* b2 = (UByte*)b;
- return b2[sizeof(SizeT) + rz_byteno];
+ return b2[sizeof(ULong) + sizeof(SizeT) + rz_byteno];
}
static __inline__
UByte get_rz_hi_byte ( Arena* a, Block* b, UInt rz_byteno )
@@ -420,7 +448,9 @@
// redzone size if necessary to achieve this.
a->rz_szB = rz_szB;
while (0 != overhead_szB_lo(a) % VG_MIN_MALLOC_SZB) a->rz_szB++;
- vg_assert(overhead_szB_lo(a) == overhead_szB_hi(a));
+ // vg_assert(overhead_szB_lo(a) == overhead_szB_hi(a));
+ vg_assert(0 == overhead_szB_lo(a) % VG_MIN_MALLOC_SZB);
+ vg_assert(0 == overhead_szB_hi(a) % VG_MIN_MALLOC_SZB);
a->min_sblock_szB = min_sblock_szB;
for (i = 0; i < N_MALLOC_LISTS; i++) a->freelist[i] = NULL;
@@ -431,6 +461,7 @@
a->bytes_on_loan = 0;
a->bytes_mmaped = 0;
a->bytes_on_loan_max = 0;
+ a->next_profile_at = 25 * 1000 * 1000;
vg_assert(sizeof(a->sblocks_initial)
== SBLOCKS_SIZE_INITIAL * sizeof(Superblock*));
}
@@ -448,6 +479,16 @@
}
}
+void VG_(print_arena_cc_analysis) ( void )
+{
+ UInt i;
+ vg_assert( VG_(clo_profile_heap) );
+ for (i = 0; i < VG_N_ARENAS; i++) {
+ cc_analyse_alloc_arena(i);
+ }
+}
+
+
/* This library is self-initialising, as it makes this more self-contained,
less coupled with the outside world. Hence VG_(arena_malloc)() and
VG_(arena_free)() below always call ensure_mm_init() to ensure things are
@@ -804,7 +845,7 @@
}
if (p_best < a->freelist[lno]) {
# ifdef VERBOSE_MALLOC
- VG_(printf)("retreat by %d\n", a->freelist[lno] - p_best);
+ VG_(printf)("retreat by %ld\n", (Word)(a->freelist[lno] - p_best));
# endif
a->freelist[lno] = p_best;
}
@@ -930,8 +971,8 @@
if (arena_bytes_on_loan != a->bytes_on_loan) {
# ifdef VERBOSE_MALLOC
- VG_(printf)( "sanity_check_malloc_arena: a->bytes_on_loan %d, "
- "arena_bytes_on_loan %d: "
+ VG_(printf)( "sanity_check_malloc_arena: a->bytes_on_loan %ld, "
+ "arena_bytes_on_loan %ld: "
"MISMATCH\n", a->bytes_on_loan, arena_bytes_on_loan);
# endif
ppSuperblocks(a);
@@ -991,6 +1032,110 @@
}
+#define N_AN_CCS 1000
+
+typedef struct { ULong nBytes; ULong nBlocks; HChar* cc; } AnCC;
+
+static AnCC anCCs[N_AN_CCS];
+
+static Int cmp_AnCC_by_vol ( void* v1, void* v2 ) {
+ AnCC* ancc1 = (AnCC*)v1;
+ AnCC* ancc2 = (AnCC*)v2;
+ if (ancc1->nBytes < ancc2->nBytes) return -1;
+ if (ancc1->nBytes > ancc2->nBytes) return 1;
+ return 0;
+}
+
+static void cc_analyse_alloc_arena ( ArenaId aid )
+{
+ Word i, j, k;
+ Arena* a;
+ Block* b;
+ Bool thisFree, lastWasFree;
+ SizeT b_bszB;
+
+ HChar* cc;
+ UInt n_ccs = 0;
+ //return;
+ a = arenaId_to_ArenaP(aid);
+ if (a->name == NULL) {
+ /* arena is not in use, is not initialised and will fail the
+ sanity check that follows. */
+ return;
+ }
+
+ sanity_check_malloc_arena(aid);
+
+ VG_(printf)(
+ "-------- Arena \"%s\": %ld mmap'd, %ld/%ld max/curr --------\n",
+ a->name, a->bytes_mmaped, a->bytes_on_loan_max, a->bytes_on_loan
+ );
+
+ for (j = 0; j < a->sblocks_used; ++j) {
+ Superblock * sb = a->sblocks[j];
+ lastWasFree = False;
+ for (i = 0; i < sb->n_payload_bytes; i += mk_plain_bszB(b_bszB)) {
+ b = (Block*)&sb->payload_bytes[i];
+ b_bszB = get_bszB_as_is(b);
+ if (!blockSane(a, b)) {
+ VG_(printf)("sanity_check_malloc_arena: sb %p, block %ld "
+ "(bszB %lu): BAD\n", sb, i, b_bszB );
+ tl_assert(0);
+ }
+ thisFree = !is_inuse_block(b);
+ if (thisFree && lastWasFree) {
+ VG_(printf)("sanity_check_malloc_arena: sb %p, block %ld "
+ "(bszB %lu): UNMERGED FREES\n", sb, i, b_bszB );
+ tl_assert(0);
+ }
+ lastWasFree = thisFree;
+
+ if (thisFree) continue;
+
+ if (0)
+ VG_(printf)("block: inUse=%d pszB=%d cc=%s\n",
+ (Int)(!thisFree),
+ (Int)bszB_to_pszB(a, b_bszB),
+ get_cc(b));
+ cc = get_cc(b);
+ tl_assert(cc);
+ for (k = 0; k < n_ccs; k++) {
+ tl_assert(anCCs[k].cc);
+ if (0 == VG_(strcmp)(cc, anCCs[k].cc))
+ break;
+ }
+ tl_assert(k >= 0 && k <= n_ccs);
+
+ if (k == n_ccs) {
+ tl_assert(n_ccs < N_AN_CCS-1);
+ n_ccs++;
+ anCCs[k].nBytes = 0;
+ anCCs[k].nBlocks = 0;
+ anCCs[k].cc = cc;
+ }
+
+ tl_assert(k >= 0 && k < n_ccs && k < N_AN_CCS);
+ anCCs[k].nBytes += (ULong)bszB_to_pszB(a, b_bszB);
+ anCCs[k].nBlocks++;
+ }
+ if (i > sb->n_payload_bytes) {
+ VG_(printf)( "sanity_check_malloc_arena: sb %p: last block "
+ "overshoots end\n", sb);
+ tl_assert(0);
+ }
+ }
+
+ VG_(ssort)( &anCCs[0], n_ccs, sizeof(anCCs[0]), cmp_AnCC_by_vol );
+
+ for (k = 0; k < n_ccs; k++) {
+ VG_(printf)("%'13llu in %'9llu: %s\n",
+ anCCs[k].nBytes, anCCs[k].nBlocks, anCCs[k].cc );
+ }
+
+ VG_(printf)("\n");
+}
+
+
void VG_(sanity_check_malloc_all) ( void )
{
UInt i;
@@ -1092,7 +1237,7 @@
return ((req_pszB + n) & (~n));
}
-void* VG_(arena_malloc) ( ArenaId aid, SizeT req_pszB )
+void* VG_(arena_malloc) ( ArenaId aid, HChar* cc, SizeT req_pszB )
{
SizeT req_bszB, frag_bszB, b_bszB;
UInt lno, i;
@@ -1108,6 +1253,10 @@
req_pszB = align_req_pszB(req_pszB);
req_bszB = pszB_to_bszB(a, req_pszB);
+ // You must provide a cost-center name against which to charge
+ // this allocation; it isn't optional.
+ vg_assert(cc);
+
// Scan through all the big-enough freelists for a block.
//
// Nb: this scanning might be expensive in some cases. Eg. if you
@@ -1185,6 +1334,7 @@
b = (Block*)&new_sb->payload_bytes[0];
lno = pszB_to_listNo(bszB_to_pszB(a, new_sb->n_payload_bytes));
mkFreeBlock ( a, b, new_sb->n_payload_bytes, lno);
+ set_cc(b, "admin.free-new-sb-1");
// fall through
obtained_block:
@@ -1205,19 +1355,31 @@
// printf( "split %dB into %dB and %dB\n", b_bszB, req_bszB, frag_bszB );
unlinkBlock(a, b, lno);
mkInuseBlock(a, b, req_bszB);
+ set_cc(b, cc);
mkFreeBlock(a, &b[req_bszB], frag_bszB,
pszB_to_listNo(bszB_to_pszB(a, frag_bszB)));
+ set_cc(&b[req_bszB], "admin.fragmentation-1");
b_bszB = get_bszB(b);
} else {
// No, mark as in use and use as-is.
unlinkBlock(a, b, lno);
mkInuseBlock(a, b, b_bszB);
+ set_cc(b, cc);
}
// Update stats
a->bytes_on_loan += bszB_to_pszB(a, b_bszB);
- if (a->bytes_on_loan > a->bytes_on_loan_max)
+ if (a->bytes_on_loan > a->bytes_on_loan_max) {
a->bytes_on_loan_max = a->bytes_on_loan;
+ if (a->bytes_on_loan_max >= a->next_profile_at) {
+ /* next profile after 10% more growth */
+ a->next_profile_at
+ = (SizeT)(
+ (((ULong)a->bytes_on_loan_max) * 110ULL) / 100ULL );
+ if (VG_(clo_profile_heap))
+ cc_analyse_alloc_arena(aid);
+ }
+ }
# ifdef DEBUG_MALLOC
sanity_check_malloc_arena(aid);
@@ -1286,6 +1448,7 @@
// Put this chunk back on a list somewhere.
b_listno = pszB_to_listNo(b_pszB);
mkFreeBlock( a, b, b_bszB, b_listno );
+ set_cc(b, "admin.free-1");
// See if this block can be merged with its successor.
// First test if we're far enough before the superblock's end to possibly
@@ -1304,6 +1467,7 @@
b_bszB += other_bszB;
b_listno = pszB_to_listNo(bszB_to_pszB(a, b_bszB));
mkFreeBlock( a, b, b_bszB, b_listno );
+ set_cc(b, "admin.free-2");
}
} else {
// Not enough space for successor: check that b is the last block
@@ -1326,6 +1490,7 @@
b_bszB += other_bszB;
b_listno = pszB_to_listNo(bszB_to_pszB(a, b_bszB));
mkFreeBlock( a, b, b_bszB, b_listno );
+ set_cc(b, "admin.free-3");
}
} else {
// Not enough space for predecessor: check that b is the first block,
@@ -1373,7 +1538,8 @@
. . . . . . .
*/
-void* VG_(arena_memalign) ( ArenaId aid, SizeT req_alignB, SizeT req_pszB )
+void* VG_(arena_memalign) ( ArenaId aid, HChar* cc,
+ SizeT req_alignB, SizeT req_pszB )
{
SizeT base_pszB_req, base_pszB_act, frag_bszB;
Block *base_b, *align_b;
@@ -1386,6 +1552,10 @@
vg_assert(req_pszB < MAX_PSZB);
+ // You must provide a cost-center name against which to charge
+ // this allocation; it isn't optional.
+ vg_assert(cc);
+
// Check that the requested alignment seems reasonable; that is, is
// a power of 2.
if (req_alignB < VG_MIN_MALLOC_SZB
@@ -1408,7 +1578,7 @@
/* Payload ptr for the block we are going to split. Note this
changes a->bytes_on_loan; we save and restore it ourselves. */
saved_bytes_on_loan = a->bytes_on_loan;
- base_p = VG_(arena_malloc) ( aid, base_pszB_req );
+ base_p = VG_(arena_malloc) ( aid, cc, base_pszB_req );
a->bytes_on_loan = saved_bytes_on_loan;
/* Give up if we couldn't allocate enough space */
@@ -1437,11 +1607,13 @@
/* Create the fragment block, and put it back on the relevant free list. */
mkFreeBlock ( a, base_b, frag_bszB,
pszB_to_listNo(bszB_to_pszB(a, frag_bszB)) );
+ set_cc(base_b, "admin.frag-memalign-1");
/* Create the aligned block. */
mkInuseBlock ( a, align_b,
base_p + base_pszB_act
+ overhead_szB_hi(a) - (UByte*)align_b );
+ set_cc(align_b, cc);
/* Final sanity checks. */
vg_assert( is_inuse_block(get_payload_block(a, align_p)) );
@@ -1538,7 +1710,8 @@
/*--- Services layered on top of malloc/free. ---*/
/*------------------------------------------------------------*/
-void* VG_(arena_calloc) ( ArenaId aid, SizeT nmemb, SizeT bytes_per_memb )
+void* VG_(arena_calloc) ( ArenaId aid, HChar* cc,
+ SizeT nmemb, SizeT bytes_per_memb )
{
SizeT size;
UChar* p;
@@ -1546,7 +1719,7 @@
size = nmemb * bytes_per_memb;
vg_assert(size >= nmemb && size >= bytes_per_memb);// check against overflow
- p = VG_(arena_malloc) ( aid, size );
+ p = VG_(arena_malloc) ( aid, cc, size );
VG_(memset)(p, 0, size);
@@ -1556,7 +1729,8 @@
}
-void* VG_(arena_realloc) ( ArenaId aid, void* ptr, SizeT req_pszB )
+void* VG_(arena_realloc) ( ArenaId aid, HChar* cc,
+ void* ptr, SizeT req_pszB )
{
Arena* a;
SizeT old_pszB;
@@ -1578,7 +1752,7 @@
return ptr;
}
- p_new = VG_(arena_malloc) ( aid, req_pszB );
+ p_new = VG_(arena_malloc) ( aid, cc, req_pszB );
VG_(memcpy)(p_new, ptr, old_pszB);
@@ -1589,7 +1763,8 @@
/* Inline just for the wrapper VG_(strdup) below */
-__inline__ Char* VG_(arena_strdup) ( ArenaId aid, const Char* s )
+__inline__ Char* VG_(arena_strdup) ( ArenaId aid, HChar* cc,
+ const Char* s )
{
Int i;
Int len;
@@ -1599,7 +1774,7 @@
return NULL;
len = VG_(strlen)(s) + 1;
- res = VG_(arena_malloc) (aid, len);
+ res = VG_(arena_malloc) (aid, cc, len);
for (i = 0; i < len; i++)
res[i] = s[i];
@@ -1613,9 +1788,9 @@
// All just wrappers to avoid exposing arenas to tools.
-void* VG_(malloc) ( SizeT nbytes )
+void* VG_(malloc) ( HChar* cc, SizeT nbytes )
{
- return VG_(arena_malloc) ( VG_AR_TOOL, nbytes );
+ return VG_(arena_malloc) ( VG_AR_TOOL, cc, nbytes );
}
void VG_(free) ( void* ptr )
@@ -1623,19 +1798,19 @@
VG_(arena_free) ( VG_AR_TOOL, ptr );
}
-void* VG_(calloc) ( SizeT nmemb, SizeT bytes_per_memb )
+void* VG_(calloc) ( HChar* cc, SizeT nmemb, SizeT bytes_per_memb )
{
- return VG_(arena_calloc) ( VG_AR_TOOL, nmemb, bytes_per_memb );
+ return VG_(arena_calloc) ( VG_AR_TOOL, cc, nmemb, bytes_per_memb );
}
-void* VG_(realloc) ( void* ptr, SizeT size )
+void* VG_(realloc) ( HChar* cc, void* ptr, SizeT size )
{
- return VG_(arena_realloc) ( VG_AR_TOOL, ptr, size );
+ return VG_(arena_realloc) ( VG_AR_TOOL, cc, ptr, size );
}
-Char* VG_(strdup) ( const Char* s )
+Char* VG_(strdup) ( HChar* cc, const Char* s )
{
- return VG_(arena_strdup) ( VG_AR_TOOL, s );
+ return VG_(arena_strdup) ( VG_AR_TOOL, cc, s );
}
// Useful for querying user blocks.
Modified: branches/YARD/coregrind/pub_core_mallocfree.h
===================================================================
--- branches/YARD/coregrind/pub_core_mallocfree.h 2008-09-06 18:59:27 UTC (rev 8569)
+++ branches/YARD/coregrind/pub_core_mallocfree.h 2008-09-06 19:12:41 UTC (rev 8570)
@@ -86,14 +86,16 @@
int keepcost; /* top-most, releasable (via malloc_trim) space */
};
-extern void* VG_(arena_malloc) ( ArenaId arena, SizeT nbytes );
+extern void* VG_(arena_malloc) ( ArenaId arena, HChar* cc, SizeT nbytes );
extern void VG_(arena_free) ( ArenaId arena, void* ptr );
-extern void* VG_(arena_calloc) ( ArenaId arena,
+extern void* VG_(arena_calloc) ( ArenaId arena, HChar* cc,
SizeT nmemb, SizeT bytes_per_memb );
-extern void* VG_(arena_realloc) ( ArenaId arena, void* ptr, SizeT size );
-extern void* VG_(arena_memalign)( ArenaId aid, SizeT req_alignB,
- SizeT req_pszB );
-extern Char* VG_(arena_strdup) ( ArenaId aid, const Char* s);
+extern void* VG_(arena_realloc) ( ArenaId arena, HChar* cc,
+ void* ptr, SizeT size );
+extern void* VG_(arena_memalign)( ArenaId aid, HChar* cc,
+ SizeT req_alignB, SizeT req_pszB );
+extern Char* VG_(arena_strdup) ( ArenaId aid, HChar* cc,
+ const Char* s);
// Nb: The ThreadId doesn't matter, it's not used.
extern SizeT VG_(arena_payload_szB) ( ThreadId tid, ArenaId aid, void* payload );
@@ -104,6 +106,8 @@
extern void VG_(print_all_arena_stats) ( void );
+extern void VG_(print_arena_cc_analysis) ( void );
+
#endif // __PUB_CORE_MALLOCFREE_H
/*--------------------------------------------------------------------*/
Modified: branches/YARD/include/pub_tool_mallocfree.h
===================================================================
--- branches/YARD/include/pub_tool_mallocfree.h 2008-09-06 18:59:27 UTC (rev 8569)
+++ branches/YARD/include/pub_tool_mallocfree.h 2008-09-06 19:12:41 UTC (rev 8570)
@@ -35,11 +35,11 @@
// These can be for allocating memory used by tools.
// Nb: the allocators *always succeed* -- they never return NULL (Valgrind
// will abort if they can't allocate the memory).
-extern void* VG_(malloc) ( SizeT nbytes );
+extern void* VG_(malloc) ( HChar* cc, SizeT nbytes );
extern void VG_(free) ( void* p );
-extern void* VG_(calloc) ( SizeT n, SizeT bytes_per_elem );
-extern void* VG_(realloc) ( void* p, SizeT size );
-extern Char* VG_(strdup) ( const Char* s );
+extern void* VG_(calloc) ( HChar* cc, SizeT n, SizeT bytes_per_elem );
+extern void* VG_(realloc) ( HChar* cc, void* p, SizeT size );
+extern Char* VG_(strdup) ( HChar* cc, const Char* s );
// Returns the usable size of a heap-block. It's the asked-for size plus
// possibly some more due to rounding up.
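The patch above gives every heap block a cost-center field (a pointer to a static string stored at offset 0 of the block header) and has `cc_analyse_alloc_arena` walk each superblock, charging every in-use block's payload size to its cost-center in a small fixed-size table. The aggregation step can be sketched in isolation as below; the names (`charge`, `bytes_for`, `N_CCS`) are illustrative, not Valgrind's API, and the real code additionally sorts and pretty-prints the table.

```c
#include <assert.h>
#include <string.h>

/* Miniature of the per-arena cost-center table used by
   cc_analyse_alloc_arena: each live block is charged to a static
   string, and totals accumulate per distinct string. */
#define N_CCS 16

typedef struct { unsigned long nBytes; unsigned long nBlocks; const char* cc; } AnCC;

static AnCC ccs[N_CCS];
static int  n_ccs = 0;

/* Charge 'bytes' to cost-center 'cc', creating its slot on first use. */
static void charge(const char* cc, unsigned long bytes)
{
   int k;
   for (k = 0; k < n_ccs; k++)
      if (0 == strcmp(cc, ccs[k].cc))
         break;
   if (k == n_ccs) {            /* first allocation under this cc */
      assert(n_ccs < N_CCS);
      ccs[k].nBytes  = 0;
      ccs[k].nBlocks = 0;
      ccs[k].cc      = cc;
      n_ccs++;
   }
   ccs[k].nBytes += bytes;
   ccs[k].nBlocks++;
}

/* Total payload bytes currently charged to 'cc' (0 if unknown). */
static unsigned long bytes_for(const char* cc)
{
   int k;
   for (k = 0; k < n_ccs; k++)
      if (0 == strcmp(cc, ccs[k].cc))
         return ccs[k].nBytes;
   return 0;
}
```

A linear scan over the cc table is adequate here because the number of distinct allocation points is small and bounded (`N_AN_CCS` in the real code); the expensive part is walking the blocks, not matching the strings.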
From: <sv...@va...> - 2008-09-06 18:59:18
|
Author: sewardj
Date: 2008-09-06 19:59:27 +0100 (Sat, 06 Sep 2008)
New Revision: 8569
Log:
This should have been committed as part of r8562, but was forgotten:
* libhb_core.c: add hashing to backtrace comparison for the
event-map's context-tree
* libhb_core.c: fix massive space leak in msm_handle_{read,write}
* add cost-center annotations to all allocation points
Modified:
branches/YARD/helgrind/libhb_core.c
Modified: branches/YARD/helgrind/libhb_core.c
===================================================================
--- branches/YARD/helgrind/libhb_core.c 2008-09-06 18:57:19 UTC (rev 8568)
+++ branches/YARD/helgrind/libhb_core.c 2008-09-06 18:59:27 UTC (rev 8569)
@@ -50,7 +50,7 @@
/* fwds for
Globals needed by other parts of the library. These are set
once at startup and then never changed. */
-static void* (*main_zalloc_P)( SizeT ) = NULL;
+static void* (*main_zalloc_P)( HChar*, SizeT ) = NULL;
static void (*main_dealloc_P)( void* ) = NULL;
static void* (*main_shadow_alloc_P)( SizeT ) = NULL;
static void (*main_get_stacktrace)( Thr*, Addr*, UWord ) = NULL;
@@ -95,11 +95,11 @@
}
-static void* main_zalloc ( SizeT n ) {
+static void* main_zalloc ( HChar* cc, SizeT n ) {
void* v;
tl_assert(n >= 0);
stats__zallocd += (ULong)n;
- v = main_zalloc_P(n);
+ v = main_zalloc_P(cc,n);
// notify_malloc( (Addr)v, n );
return v;
}
@@ -199,7 +199,8 @@
/* Create new XArray, using given allocation and free function, and
for elements of the specified size. Alloc fn must not fail (that
is, if it returns it must have succeeded.) */
-extern XArray* VG_(newXA) ( void*(*alloc_fn)(SizeT),
+extern XArray* VG_(newXA) ( void*(*alloc_fn)(HChar*,SizeT),
+ HChar* cc,
void(*free_fn)(void*),
Word elemSzB );
@@ -293,7 +294,8 @@
/* See pub_tool_xarray.h for details of what this is all about. */
struct _XArray {
- void* (*alloc) ( SizeT ); /* alloc fn (nofail) */
+ void* (*alloc) ( HChar*, SizeT ); /* alloc fn (nofail) */
+ HChar* cc;
void (*free) ( void* ); /* free fn */
Int (*cmpFn) ( void*, void* ); /* cmp fn (may be NULL) */
Word elemSzB; /* element size in bytes */
@@ -304,7 +306,8 @@
};
-XArray* VG_(newXA) ( void*(*alloc_fn)(SizeT),
+XArray* VG_(newXA) ( void*(*alloc_fn)(HChar*,SizeT),
+ HChar* cc,
void(*free_fn)(void*),
Word elemSzB )
{
@@ -318,9 +321,10 @@
vg_assert(alloc_fn);
vg_assert(free_fn);
vg_assert(elemSzB > 0);
- xa = alloc_fn( sizeof(struct _XArray) );
+ xa = alloc_fn( cc, sizeof(struct _XArray) );
vg_assert(xa);
xa->alloc = alloc_fn;
+ xa->cc = cc;
xa->free = free_fn;
xa->cmpFn = NULL;
xa->elemSzB = elemSzB;
@@ -339,14 +343,14 @@
vg_assert(xa->alloc);
vg_assert(xa->free);
vg_assert(xa->elemSzB >= 1);
- nyu = xa->alloc( sizeof(struct _XArray) );
+ nyu = xa->alloc( xa->cc, sizeof(struct _XArray) );
if (!nyu)
return NULL;
/* Copy everything verbatim ... */
*nyu = *xa;
/* ... except we have to clone the contents-array */
if (nyu->arr) {
- nyu->arr = nyu->alloc( nyu->totsizeE * nyu->elemSzB );
+ nyu->arr = nyu->alloc( nyu->cc, nyu->totsizeE * nyu->elemSzB );
if (!nyu->arr) {
nyu->free(nyu);
return NULL;
@@ -409,7 +413,7 @@
if (0)
VG_(printf)("addToXA: increasing from %ld to %ld\n",
xa->totsizeE, newsz);
- tmp = xa->alloc(newsz * xa->elemSzB);
+ tmp = xa->alloc(xa->cc, newsz * xa->elemSzB);
vg_assert(tmp);
if (xa->usedsizeE > 0)
VG_(memcpy)(tmp, xa->arr, xa->usedsizeE * xa->elemSzB);
@@ -616,7 +620,8 @@
sections of the map, or the whole thing. If kCmp is NULL then the
ordering used is unsigned word ordering (UWord) on the key
values. */
-WordFM* HG_(newFM) ( void* (*alloc_nofail)( SizeT ),
+WordFM* HG_(newFM) ( void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word (*kCmp)(UWord,UWord) );
@@ -699,7 +704,8 @@
// FIXME! find some way to turn this back into an abstract type.
typedef
struct {
- void* (*alloc_nofail)( SizeT );
+ void* (*alloc_nofail)( HChar*, SizeT );
+ HChar* cc;
void (*dealloc)(void*);
UWord firstWord;
UWord firstCount;
@@ -715,7 +721,8 @@
/* Initialise a WordBag and make it empty. Only do this once for each
bag, at the start of its lifetime. */
void HG_(initBag) ( WordBag* bag,
- void* (*alloc_nofail)( SizeT ),
+ void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*) );
/* Remove all elements from a bag, thereby making it empty, and free
@@ -867,7 +874,8 @@
struct _WordFM {
AvlNode* root;
- void* (*alloc_nofail)( SizeT );
+ void* (*alloc_nofail)( HChar*, SizeT );
+ HChar* cc;
void (*dealloc)(void*);
Word (*kCmp)(UWord,UWord);
AvlNode* nodeStack[WFM_STKMAX]; // Iterator node stack
@@ -1246,12 +1254,13 @@
UWord(*dopyK)(UWord),
UWord(*dopyV)(UWord),
UWord(*dopyW)(UWord),
- void*(alloc_nofail)(SizeT) )
+ void*(alloc_nofail)(HChar*,SizeT),
+ HChar* cc )
{
AvlNode* nyu;
if (! nd)
return NULL;
- nyu = alloc_nofail(sizeof(AvlNode));
+ nyu = alloc_nofail(cc, sizeof(AvlNode));
tl_assert(nyu);
nyu->child[0] = nd->child[0];
@@ -1291,13 +1300,13 @@
/* Copy subtrees */
if (nyu->child[0]) {
nyu->child[0] = avl_dopy( nyu->child[0],
- dopyK, dopyV, dopyW, alloc_nofail );
+ dopyK, dopyV, dopyW, alloc_nofail, cc );
if (! nyu->child[0])
return NULL;
}
if (nyu->child[1]) {
nyu->child[1] = avl_dopy( nyu->child[1],
- dopyK, dopyV, dopyW, alloc_nofail );
+ dopyK, dopyV, dopyW, alloc_nofail, cc );
if (! nyu->child[1])
return NULL;
}
@@ -1307,13 +1316,15 @@
/* Initialise a WordFM. */
static void initFM ( WordFM* fm,
- void* (*alloc_nofail)( SizeT ),
+ void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word (*kCmp)(UWord,UWord) )
{
fm->root = NULL;
fm->kCmp = kCmp;
fm->alloc_nofail = alloc_nofail;
+ fm->cc = cc;
fm->dealloc = dealloc;
fm->stackTop = 0;
}
@@ -1327,13 +1338,14 @@
sections of the map, or the whole thing. If kCmp is NULL then the
ordering used is unsigned word ordering (UWord) on the key
values. */
-WordFM* HG_(newFM) ( void* (*alloc_nofail)( SizeT ),
+WordFM* HG_(newFM) ( void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word (*kCmp)(UWord,UWord) )
{
- WordFM* fm = alloc_nofail(sizeof(WordFM));
+ WordFM* fm = alloc_nofail(cc, sizeof(WordFM));
tl_assert(fm);
- initFM(fm, alloc_nofail, dealloc, kCmp);
+ initFM(fm, alloc_nofail, cc, dealloc, kCmp);
return fm;
}
@@ -1377,7 +1389,7 @@
{
MaybeWord oldV;
AvlNode* node;
- node = fm->alloc_nofail( sizeof(struct _AvlNode) );
+ node = fm->alloc_nofail( fm->cc, sizeof(struct _AvlNode) );
node->key = k;
node->val = v;
node->wal = w;
@@ -1575,7 +1587,7 @@
/* can't clone the fm whilst iterating on it */
tl_assert(fm->stackTop == 0);
- nyu = fm->alloc_nofail( sizeof(WordFM) );
+ nyu = fm->alloc_nofail( fm->cc, sizeof(WordFM) );
tl_assert(nyu);
*nyu = *fm;
@@ -1586,7 +1598,8 @@
if (nyu->root) {
nyu->root = avl_dopy( nyu->root,
- dopyK, dopyV, dopyW, fm->alloc_nofail );
+ dopyK, dopyV, dopyW,
+ fm->alloc_nofail, fm->cc );
if (! nyu->root)
return NULL;
}
@@ -1648,10 +1661,12 @@
}
void HG_(initBag) ( WordBag* bag,
- void* (*alloc_nofail)( SizeT ),
+ void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*) )
{
bag->alloc_nofail = alloc_nofail;
+ bag->cc = cc;
bag->dealloc = dealloc;
bag->firstWord = 0;
bag->firstCount = 0;
@@ -1691,7 +1706,7 @@
/* it's not the Distinguished Element. Try the rest */
{ UWord key, count;
if (bag->rest == NULL) {
- bag->rest = HG_(newFM)( bag->alloc_nofail, bag->dealloc,
+ bag->rest = HG_(newFM)( bag->alloc_nofail, bag->cc, bag->dealloc,
NULL/*unboxed uword cmp*/ );
}
tl_assert(bag->rest);
@@ -2630,7 +2645,8 @@
/* No free F line found. Expand existing array and try again. */
new_size = sm->linesF_size==0 ? 1 : 2 * sm->linesF_size;
- nyu = main_zalloc( new_size * sizeof(LineF) );
+ nyu = main_zalloc( "libhb.aFfw.1 (LineF storage)",
+ new_size * sizeof(LineF) );
tl_assert(nyu);
stats__secmap_linesF_allocd += (new_size - sm->linesF_size);
@@ -4092,7 +4108,8 @@
rcdec = p_rcdec;
tl_assert(map_shmem == NULL);
- map_shmem = HG_(newFM)( main_zalloc, main_dealloc,
+ map_shmem = HG_(newFM)( main_zalloc, "libhb.zsm_init.1 (map_shmem)",
+ main_dealloc,
NULL/*unboxed UWord cmp*/);
tl_assert(map_shmem != NULL);
shmem__invalidate_scache();
@@ -4216,10 +4233,11 @@
VTS* VTS__new ( void )
{
VTS* vts;
- vts = main_zalloc( sizeof(VTS) );
+ vts = main_zalloc( "libhb.VTS__new.1", sizeof(VTS) );
tl_assert(vts);
vts->id = VtsID_INVALID;
- vts->ts = VG_(newXA)( main_zalloc, main_dealloc, sizeof(ScalarTS) );
+ vts->ts = VG_(newXA)( main_zalloc, "libhb.VTS__new.2",
+ main_dealloc, sizeof(ScalarTS) );
tl_assert(vts->ts);
return vts;
}
@@ -4631,7 +4649,8 @@
static void vts_set_init ( void )
{
tl_assert(!vts_set);
- vts_set = HG_(newFM)( main_zalloc, main_dealloc,
+ vts_set = HG_(newFM)( main_zalloc, "libhb.vts_set_init.1",
+ main_dealloc,
(Word(*)(UWord,UWord))VTS__cmp_structural );
tl_assert(vts_set);
}
@@ -4707,7 +4726,8 @@
static void vts_tab_init ( void )
{
vts_tab
- = VG_(newXA)( main_zalloc, main_dealloc, sizeof(VtsTE) );
+ = VG_(newXA)( main_zalloc, "libhb.vts_tab_init.1",
+ main_dealloc, sizeof(VtsTE) );
vts_tab_freelist
= VtsID_INVALID;
tl_assert(vts_tab);
@@ -5088,7 +5108,7 @@
};
static Thr* Thr__new ( void ) {
- Thr* thr = main_zalloc( sizeof(Thr) );
+ Thr* thr = main_zalloc( "libhb.Thr__new.1", sizeof(Thr) );
thr->viR = VtsID_INVALID;
thr->viW = VtsID_INVALID;
return thr;
@@ -5263,7 +5283,7 @@
// //
/////////////////////////////////////////////////////////
-#define EVENT_MAP_GC_AT 500000
+#define EVENT_MAP_GC_AT 1000000
#define EVENT_MAP_GC_DISCARD_FRACTION 0.5
/* This is in two parts:
@@ -5304,7 +5324,7 @@
UWord magic;
UWord rc;
UWord rcX; /* used for crosschecking */
- UWord frames[N_FRAMES];
+ UWord frames[1 + N_FRAMES]; /* first word is hash of all the rest */
}
RCEC;
@@ -5316,7 +5336,9 @@
Word i;
tl_assert(ec1 && ec1->magic == RCEC_MAGIC);
tl_assert(ec2 && ec2->magic == RCEC_MAGIC);
- for (i = 0; i < N_FRAMES; i++) {
+ if (ec1->frames[0] < ec2->frames[0]) return -1;
+ if (ec1->frames[0] > ec2->frames[0]) return 1;
+ for (i = 1; i < 1 + N_FRAMES; i++) {
if (ec1->frames[i] < ec2->frames[i]) return -1;
if (ec1->frames[i] > ec2->frames[i]) return 1;
}
@@ -5370,13 +5392,27 @@
return copy;
}
+static inline UWord ROLW ( UWord w, Int n )
+{
+ Int bpw = 8 * sizeof(UWord);
+ w = (w << n) | (w >> (bpw-n));
+ return w;
+}
+
static RCEC* get_RCEC ( Thr* thr )
{
- RCEC example;
+ UWord hash, i;
+ RCEC example;
example.magic = RCEC_MAGIC;
example.rc = 0;
example.rcX = 0;
- main_get_stacktrace( thr, &example.frames[0], N_FRAMES );
+ main_get_stacktrace( thr, &example.frames[1], N_FRAMES );
+ hash = 0;
+ for (i = 1; i < 1 + N_FRAMES; i++) {
+ hash ^= example.frames[i];
+ hash = ROLW(hash, 19);
+ }
+ example.frames[0] = hash;
return ctxt__find_or_add( &example );
}
@@ -5410,7 +5446,7 @@
static UWord oldrefTreeN = 0; /* # elems in oldrefTree */
static UWord oldrefGenIncAt = 0; /* inc gen # when size hits this */
-static void event_map_bind ( Addr a, struct EC_* ec, Thr* thr )
+static void event_map_bind ( Addr a, struct EC_* ecxx, Thr* thr )
{
OldRef key, *ref;
RCEC* here;
@@ -5463,7 +5499,7 @@
tl_assert(ref->magic == OldRef_MAGIC);
tl_assert(ref->rcec);
tl_assert(ref->rcec->magic == RCEC_MAGIC);
- *resEC = main_stacktrace_to_EC(&ref->rcec->frames[0], N_FRAMES);
+ *resEC = main_stacktrace_to_EC(&ref->rcec->frames[1], N_FRAMES);
*resThr = ref->thr;
return True;
} else {
@@ -5477,7 +5513,8 @@
contextTree = VG_(OSetGen_Create)(
0,
(Word(*)(const void *, const void*))RCEC__cmp_by_frames,
- main_zalloc, main_dealloc
+ main_zalloc, "libhb.event_map_init.1 (context tree)",
+ main_dealloc
);
tl_assert(contextTree);
@@ -5485,7 +5522,8 @@
oldrefTree = VG_(OSetGen_Create)(
0,
(Word(*)(const void *, const void*))OldRef__cmp_by_EA,
- main_zalloc, main_dealloc
+ main_zalloc, "libhb.event_map_init.2 (oldref tree)",
+ main_dealloc
);
tl_assert(oldrefTree);
@@ -5542,7 +5580,8 @@
/* Compute the distribution of generation values in the ref tree */
/* genMap :: generation-number -> count-of-nodes-with-that-number */
- genMap = HG_(newFM)( main_zalloc, main_dealloc, NULL );
+ genMap = HG_(newFM)( main_zalloc, "libhb.emmG.1",
+ main_dealloc, NULL );
VG_(OSetGen_ResetIter)( oldrefTree );
while ( (oldref = VG_(OSetGen_Next)( oldrefTree )) ) {
@@ -5592,7 +5631,8 @@
stuff from it, so first we need to copy them off somewhere
else. (sigh) */
XArray* refs2del;
- refs2del = VG_(newXA)( main_zalloc, main_dealloc, sizeof(OldRef*) );
+ refs2del = VG_(newXA)( main_zalloc, "libhb.emmG.1",
+ main_dealloc, sizeof(OldRef*) );
VG_(OSetGen_ResetIter)( oldrefTree );
while ( (oldref = VG_(OSetGen_Next)( oldrefTree )) ) {
@@ -5748,7 +5788,7 @@
tl_assert(svNew != SVal_INVALID);
if (svNew != svOld) {
if (MSM_CONFACC && SVal__isC(svOld) && SVal__isC(svNew)) {
- struct EC_* ec = main_get_EC( info->acc_thr );
+ struct EC_* ec = NULL; //main_get_EC( info->acc_thr );
event_map_bind( info->ea, ec, info->acc_thr );
stats__msm_read_change++;
}
@@ -5813,7 +5853,7 @@
tl_assert(svNew != SVal_INVALID);
if (svNew != svOld) {
if (MSM_CONFACC && SVal__isC(svOld) && SVal__isC(svNew)) {
- struct EC_* ec = main_get_EC( info->acc_thr );
+ struct EC_* ec = NULL; //main_get_EC( info->acc_thr );
event_map_bind( info->ea, ec, info->acc_thr );
stats__msm_write_change++;
}
@@ -5838,7 +5878,7 @@
};
static SO* SO__Alloc ( void ) {
- SO* so = main_zalloc( sizeof(SO) );
+ SO* so = main_zalloc( "libhb.SO__Alloc.1", sizeof(SO) );
so->viR = VtsID_INVALID;
so->viW = VtsID_INVALID;
so->magic = SO_MAGIC;
@@ -5883,7 +5923,7 @@
Thr* libhb_init (
- void* (*zalloc)( SizeT ),
+ void* (*zalloc)( HChar*, SizeT ),
void (*dealloc)( void* ),
void* (*shadow_alloc)( SizeT ),
void (*get_stacktrace)( Thr*, Addr*, UWord ),
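The backtrace-hashing change in this commit reserves word 0 of `RCEC.frames[]` for a hash of the remaining `N_FRAMES` words, computed with an XOR/rotate-left mix. Because `RCEC__cmp_by_frames` compares the hash word first, two different backtraces almost always diverge at word 0 and the comparison returns without touching the other frames. A self-contained sketch of the same trick (standalone names, not the libhb types):

```c
#include <assert.h>
#include <stddef.h>

#define N_FRAMES 8

typedef unsigned long UWord;

/* Rotate w left by n bits (0 < n < word width, as with the fixed 19 below). */
static UWord rolw(UWord w, int n)
{
   int bpw = 8 * sizeof(UWord);
   return (w << n) | (w >> (bpw - n));
}

/* Fill frames[0] with a hash of frames[1 .. N_FRAMES]. */
static void hash_frames(UWord* frames)
{
   UWord hash = 0;
   int   i;
   for (i = 1; i < 1 + N_FRAMES; i++) {
      hash ^= frames[i];
      hash = rolw(hash, 19);
   }
   frames[0] = hash;
}

/* Lexicographic compare; the hash word comes first, so unequal
   traces usually bail out after one comparison. */
static int cmp_frames(const UWord* a, const UWord* b)
{
   int i;
   for (i = 0; i < 1 + N_FRAMES; i++) {
      if (a[i] < b[i]) return -1;
      if (a[i] > b[i]) return 1;
   }
   return 0;
}
```

The rotation keeps positional information (so permuted frames hash differently), which a plain XOR of the addresses would lose.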
From: <sv...@va...> - 2008-09-06 18:57:09
|
Author: sewardj
Date: 2008-09-06 19:57:19 +0100 (Sat, 06 Sep 2008)
New Revision: 8568
Log:
Add cost-center annotations to all allocation points in Massif.
Modified:
branches/YARD/massif/ms_main.c
Modified: branches/YARD/massif/ms_main.c
===================================================================
--- branches/YARD/massif/ms_main.c 2008-09-06 18:55:41 UTC (rev 8567)
+++ branches/YARD/massif/ms_main.c 2008-09-06 18:57:19 UTC (rev 8568)
@@ -292,7 +292,8 @@
static void init_alloc_fns(void)
{
// Create the list, and add the default elements.
- alloc_fns = VG_(newXA)(VG_(malloc), VG_(free), sizeof(Char*));
+ alloc_fns = VG_(newXA)(VG_(malloc), "ms.main.iaf.1",
+ VG_(free), sizeof(Char*));
#define DO(x) { Char* s = x; VG_(addToXA)(alloc_fns, &s); }
// Ordered according to (presumed) frequency.
@@ -583,11 +584,13 @@
if (parent->n_children == parent->max_children) {
if (parent->max_children == 0) {
parent->max_children = 4;
- parent->children = VG_(malloc)( parent->max_children * sizeof(XPt*) );
+ parent->children = VG_(malloc)( "ms.main.acx.1",
+ parent->max_children * sizeof(XPt*) );
n_xpt_init_expansions++;
} else {
parent->max_children *= 2; // Double size
- parent->children = VG_(realloc)( parent->children,
+ parent->children = VG_(realloc)( "ms.main.acx.2",
+ parent->children,
parent->max_children * sizeof(XPt*) );
n_xpt_later_expansions++;
}
@@ -650,7 +653,7 @@
n_child_sxpts = n_sig_children + ( n_insig_children > 0 ? 1 : 0 );
// Duplicate the XPt.
- sxpt = VG_(malloc)(sizeof(SXPt));
+ sxpt = VG_(malloc)("ms.main.dX.1", sizeof(SXPt));
n_sxpt_allocs++;
sxpt->tag = SigSXPt;
sxpt->szB = xpt->szB;
@@ -661,7 +664,8 @@
if (n_child_sxpts > 0) {
Int j;
SizeT sig_children_szB = 0, insig_children_szB = 0;
- sxpt->Sig.children = VG_(malloc)(n_child_sxpts * sizeof(SXPt*));
+ sxpt->Sig.children = VG_(malloc)("ms.main.dX.2",
+ n_child_sxpts * sizeof(SXPt*));
// Duplicate the significant children. (Nb: sig_children_szB +
// insig_children_szB doesn't necessarily equal xpt->szB.)
@@ -680,7 +684,7 @@
if (n_insig_children > 0) {
// Nb: We 'n_sxpt_allocs' here because creating an Insig SXPt
// doesn't involve a call to dup_XTree().
- SXPt* insig_sxpt = VG_(malloc)(sizeof(SXPt));
+ SXPt* insig_sxpt = VG_(malloc)("ms.main.dX.3", sizeof(SXPt));
n_sxpt_allocs++;
insig_sxpt->tag = InsigSXPt;
insig_sxpt->szB = insig_children_szB;
@@ -1478,7 +1482,7 @@
}
// Make new HP_Chunk node, add to malloc_list
- hc = VG_(malloc)(sizeof(HP_Chunk));
+ hc = VG_(malloc)("ms.main.nb.1", sizeof(HP_Chunk));
hc->req_szB = req_szB;
hc->slop_szB = slop_szB;
hc->data = (Addr)p;
@@ -2016,7 +2020,8 @@
if (is_detailed_snapshot(snapshot)) {
// Detailed snapshot -- print heap tree.
Int depth_str_len = clo_depth + 3;
- Char* depth_str = VG_(malloc)(sizeof(Char) * depth_str_len);
+ Char* depth_str = VG_(malloc)("ms.main.pps.1",
+ sizeof(Char) * depth_str_len);
SizeT snapshot_total_szB =
snapshot->heap_szB + snapshot->heap_extra_szB + snapshot->stacks_szB;
depth_str[0] = '\0'; // Initialise depth_str to "".
@@ -2184,7 +2189,8 @@
}
// Initialise snapshot array, and sanity-check it.
- snapshots = VG_(malloc)(sizeof(Snapshot) * clo_max_snapshots);
+ snapshots = VG_(malloc)("ms.main.mpoci.1",
+ sizeof(Snapshot) * clo_max_snapshots);
// We don't want to do snapshot sanity checks here, because they're
// currently uninitialised.
for (i = 0; i < clo_max_snapshots; i++) {
@@ -2236,7 +2242,8 @@
init_alloc_fns();
// Initialise args_for_massif.
- args_for_massif = VG_(newXA)(VG_(malloc), VG_(free), sizeof(HChar*));
+ args_for_massif = VG_(newXA)(VG_(malloc), "ms.main.mprci.1",
+ VG_(free), sizeof(HChar*));
}
VG_DETERMINE_INTERFACE_VERSION(ms_pre_clo_init)
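A pattern running through this Massif commit (and the libhb one before it) is that containers such as `VG_(newXA)` now take the cost-center string alongside the allocator callback and remember it, so every later internal growth is charged to the creation site rather than to the container code. A minimal sketch of that design, under assumed names (`IntXA`, `new_xa`, `add_to_xa` are illustrative, not Valgrind's API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Allocator callback that, like VG_(malloc) after this change,
   takes a cost-center string as its first argument. */
typedef void* (*AllocFn)(const char* cc, size_t n);

typedef struct {
   AllocFn     alloc;
   const char* cc;         /* remembered for every later growth */
   size_t      used, cap;  /* in elements */
   int*        arr;
} IntXA;

static const char* last_cc = NULL;   /* records the cc of the last allocation, for the demo */

static void* demo_alloc(const char* cc, size_t n)
{
   last_cc = cc;
   return malloc(n);
}

static IntXA* new_xa(AllocFn fn, const char* cc)
{
   IntXA* xa = fn(cc, sizeof(IntXA));
   xa->alloc = fn;
   xa->cc    = cc;
   xa->used  = 0;
   xa->cap   = 0;
   xa->arr   = NULL;
   return xa;
}

static void add_to_xa(IntXA* xa, int v)
{
   if (xa->used == xa->cap) {
      size_t newcap = xa->cap ? 2 * xa->cap : 4;
      /* Growth is charged to the creator's cc, not to this function. */
      int* tmp = xa->alloc(xa->cc, newcap * sizeof(int));
      if (xa->arr)
         memcpy(tmp, xa->arr, xa->used * sizeof(int));
      free(xa->arr);
      xa->arr = tmp;
      xa->cap = newcap;
   }
   xa->arr[xa->used++] = v;
}
```

Storing the cc in the container is what lets a heap profile attribute, say, `alloc_fns` growth to `"ms.main.iaf.1"` long after `init_alloc_fns` has returned.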
From: <sv...@va...> - 2008-09-06 18:55:32
|
Author: sewardj
Date: 2008-09-06 19:55:41 +0100 (Sat, 06 Sep 2008)
New Revision: 8567
Log:
Add cost-center annotations to all allocation points in Omega.
Modified:
branches/YARD/exp-omega/o_main.c
Modified: branches/YARD/exp-omega/o_main.c
===================================================================
--- branches/YARD/exp-omega/o_main.c 2008-09-06 18:46:45 UTC (rev 8566)
+++ branches/YARD/exp-omega/o_main.c 2008-09-06 18:55:41 UTC (rev 8567)
@@ -399,7 +399,7 @@
/*
** We don't have a node for this address. Create one now.
*/
- o_lastPBitNode = VG_(malloc)( sizeof(PBitNode) );
+ o_lastPBitNode = VG_(malloc)( "om.ogPBN.1", sizeof(PBitNode) );
tl_assert(o_lastPBitNode);
VG_(memset)(o_lastPBitNode, 0, sizeof(PBitNode));
o_lastPBitNode->hdr.key = key;
@@ -903,7 +903,7 @@
/*
** Create a new block and add it to the leaked list.
*/
- item = VG_(malloc)(sizeof(BlockRecord));
+ item = VG_(malloc)("om.oaLB.1", sizeof(BlockRecord));
tl_assert(item);
item->count = 1;
@@ -1288,7 +1288,7 @@
if(!smb->pointers)
{
smb->pointers =
- VG_(malloc)((smb->refNum + 8) * sizeof(TrackedPointer *));
+ VG_(malloc)("om.oAMBR.1", (smb->refNum + 8) * sizeof(TrackedPointer *));
tl_assert(smb->pointers);
}
else if(!((smb->refNum + 1) & 7))
@@ -1298,7 +1298,8 @@
** Note that this will also shrink us if needed.
*/
smb->pointers =
- VG_(realloc)(smb->pointers, ((smb->refNum + 8) * sizeof(Addr)));
+ VG_(realloc)("om.oAMBR.2",
+ smb->pointers, ((smb->refNum + 8) * sizeof(Addr)));
tl_assert(smb->pointers);
}
@@ -1728,7 +1729,7 @@
/*
** Create a new shadow for the block.
*/
- smb = VG_(malloc)( sizeof(MemBlock) );
+ smb = VG_(malloc)( "om.osuS.1", sizeof(MemBlock) );
tl_assert(smb);
o_stats.shadowMemoryBlocksAllocated++;
@@ -1905,7 +1906,7 @@
*/
TrackedPointer *tp = VG_(HT_lookup)(o_TrackedPointers, TRACKED_KEY(address));
Int diff = dst - src;
- TrackedPointer *ntp = VG_(malloc)((sizeof(TrackedPointer)));
+ TrackedPointer *ntp = VG_(malloc)("om.odTP.1", (sizeof(TrackedPointer)));
MemBlock *mb = NULL;
tl_assert(tp);
@@ -1946,7 +1947,7 @@
static void o_createMemBlock(ThreadId tid, Addr start, SizeT size)
{
- MemBlock *mb = VG_(malloc)(sizeof(MemBlock));
+ MemBlock *mb = VG_(malloc)("om.ocMB.1", sizeof(MemBlock));
tl_assert(mb);
o_stats.memoryBlocksAllocated++;
@@ -2324,7 +2325,7 @@
/*
** No tracked pointer - create one now.
*/
- tp = VG_(malloc)(sizeof(TrackedPointer));
+ tp = VG_(malloc)("om.oD.1", sizeof(TrackedPointer));
tl_assert(tp);
o_stats.trackedPointersAllocated++;
o_stats.liveTrackedPointers++;
@@ -3160,7 +3161,7 @@
/*
** Create and populate the new node
*/
- tn = VG_(malloc)(sizeof(TreeNode));
+ tn = VG_(malloc)("om.obMbT.1", sizeof(TreeNode));
VG_(memset)(tn, 0, sizeof(TreeNode));
tn->start = mb->hdr.key;
@@ -3299,7 +3300,7 @@
/*
** Create a new block and add it to the circular records list.
*/
- BlockRecord *item = VG_(malloc)(sizeof(BlockRecord));
+ BlockRecord *item = VG_(malloc)("om.orCB.1", sizeof(BlockRecord));
tl_assert(item);
item->count = 1;
From: <sv...@va...> - 2008-09-06 18:46:34
Author: sewardj
Date: 2008-09-06 19:46:45 +0100 (Sat, 06 Sep 2008)
New Revision: 8566
Log:
Add cost-center annotations to all allocation points in Cachegrind.
Modified:
branches/YARD/cachegrind/cg_main.c
branches/YARD/cachegrind/cg_sim.c
Modified: branches/YARD/cachegrind/cg_main.c
===================================================================
--- branches/YARD/cachegrind/cg_main.c 2008-09-06 18:45:50 UTC (rev 8565)
+++ branches/YARD/cachegrind/cg_main.c 2008-09-06 18:46:45 UTC (rev 8566)
@@ -196,7 +196,7 @@
return *s_ptr;
} else {
Char** s_node = VG_(OSetGen_AllocNode)(stringTable, sizeof(Char*));
- *s_node = VG_(strdup)(s);
+ *s_node = VG_(strdup)("cg.main.gps.1", s);
VG_(OSetGen_Insert)(stringTable, s_node);
return *s_node;
}
@@ -1762,15 +1762,18 @@
CC_table =
VG_(OSetGen_Create)(offsetof(LineCC, loc),
cmp_CodeLoc_LineCC,
- VG_(malloc), VG_(free));
+ VG_(malloc), "cg.main.cpci.1",
+ VG_(free));
instrInfoTable =
VG_(OSetGen_Create)(/*keyOff*/0,
NULL,
- VG_(malloc), VG_(free));
+ VG_(malloc), "cg.main.cpci.2",
+ VG_(free));
stringTable =
VG_(OSetGen_Create)(/*keyOff*/0,
stringCmp,
- VG_(malloc), VG_(free));
+ VG_(malloc), "cg.main.cpci.3",
+ VG_(free));
configure_caches(&I1c, &D1c, &L2c);
Modified: branches/YARD/cachegrind/cg_sim.c
===================================================================
--- branches/YARD/cachegrind/cg_sim.c 2008-09-06 18:45:50 UTC (rev 8565)
+++ branches/YARD/cachegrind/cg_sim.c 2008-09-06 18:46:45 UTC (rev 8566)
@@ -74,7 +74,8 @@
c->size, c->line_size, c->assoc);
}
- c->tags = VG_(malloc)(sizeof(UWord) * c->sets * c->assoc);
+ c->tags = VG_(malloc)("cg.sim.ci.1",
+ sizeof(UWord) * c->sets * c->assoc);
for (i = 0; i < c->sets * c->assoc; i++)
c->tags[i] = 0;
From: <sv...@va...> - 2008-09-06 18:45:41
Author: sewardj
Date: 2008-09-06 19:45:50 +0100 (Sat, 06 Sep 2008)
New Revision: 8565
Log:
Add cost-center annotations to all allocation points in Callgrind.
Modified:
branches/YARD/callgrind/bb.c
branches/YARD/callgrind/bbcc.c
branches/YARD/callgrind/callstack.c
branches/YARD/callgrind/clo.c
branches/YARD/callgrind/command.c
branches/YARD/callgrind/context.c
branches/YARD/callgrind/costs.c
branches/YARD/callgrind/debug.c
branches/YARD/callgrind/dump.c
branches/YARD/callgrind/events.c
branches/YARD/callgrind/fn.c
branches/YARD/callgrind/global.h
branches/YARD/callgrind/jumps.c
branches/YARD/callgrind/sim.c
branches/YARD/callgrind/threads.c
Modified: branches/YARD/callgrind/bb.c
===================================================================
--- branches/YARD/callgrind/bb.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/bb.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -41,7 +41,8 @@
bbs.size = 8437;
bbs.entries = 0;
- bbs.table = (BB**) CLG_MALLOC(bbs.size * sizeof(BB*));
+ bbs.table = (BB**) CLG_MALLOC("cl.bb.ibh.1",
+ bbs.size * sizeof(BB*));
for (i = 0; i < bbs.size; i++) bbs.table[i] = NULL;
}
@@ -70,7 +71,8 @@
UInt new_idx;
new_size = 2* bbs.size +3;
- new_table = (BB**) CLG_MALLOC(new_size * sizeof(BB*));
+ new_table = (BB**) CLG_MALLOC("cl.bb.rbt.1",
+ new_size * sizeof(BB*));
if (!new_table) return;
@@ -129,7 +131,7 @@
size = sizeof(BB) + instr_count * sizeof(InstrInfo)
+ (cjmp_count+1) * sizeof(CJmpInfo);
- new = (BB*) CLG_MALLOC(size);
+ new = (BB*) CLG_MALLOC("cl.bb.nb.1", size);
VG_(memset)(new, 0, size);
new->obj = obj;
Modified: branches/YARD/callgrind/bbcc.c
===================================================================
--- branches/YARD/callgrind/bbcc.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/bbcc.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -48,7 +48,8 @@
bbccs->size = N_BBCC_INITIAL_ENTRIES;
bbccs->entries = 0;
- bbccs->table = (BBCC**) CLG_MALLOC(bbccs->size * sizeof(BBCC*));
+ bbccs->table = (BBCC**) CLG_MALLOC("cl.bbcc.ibh.1",
+ bbccs->size * sizeof(BBCC*));
for (i = 0; i < bbccs->size; i++) bbccs->table[i] = NULL;
}
@@ -197,7 +198,8 @@
BBCC *curr_BBCC, *next_BBCC;
new_size = 2*current_bbccs.size+3;
- new_table = (BBCC**) CLG_MALLOC(new_size * sizeof(BBCC*));
+ new_table = (BBCC**) CLG_MALLOC("cl.bbcc.rbh.1",
+ new_size * sizeof(BBCC*));
if (!new_table) return;
@@ -246,7 +248,7 @@
BBCC** bbccs;
int i;
- bbccs = (BBCC**) CLG_MALLOC(sizeof(BBCC*) * size);
+ bbccs = (BBCC**) CLG_MALLOC("cl.bbcc.nr.1", sizeof(BBCC*) * size);
for(i=0;i<size;i++)
bbccs[i] = 0;
@@ -271,7 +273,8 @@
/* We need cjmp_count+1 JmpData structs:
* the last is for the unconditional jump/call/ret at end of BB
*/
- new = (BBCC*)CLG_MALLOC(sizeof(BBCC) +
+ new = (BBCC*)CLG_MALLOC("cl.bbcc.nb.1",
+ sizeof(BBCC) +
(bb->cjmp_count+1) * sizeof(JmpData));
new->bb = bb;
new->tid = CLG_(current_tid);
Modified: branches/YARD/callgrind/callstack.c
===================================================================
--- branches/YARD/callgrind/callstack.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/callstack.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -52,7 +52,8 @@
CLG_ASSERT(s != 0);
s->size = N_CALL_STACK_INITIAL_ENTRIES;
- s->entry = (call_entry*) CLG_MALLOC(s->size * sizeof(call_entry));
+ s->entry = (call_entry*) CLG_MALLOC("cl.callstack.ics.1",
+ s->size * sizeof(call_entry));
s->sp = 0;
s->entry[0].cxt = 0; /* for assertion in push_cxt() */
@@ -96,7 +97,8 @@
cs->size *= 2;
while (i > cs->size) cs->size *= 2;
- cs->entry = (call_entry*) VG_(realloc)(cs->entry,
+ cs->entry = (call_entry*) VG_(realloc)("cl.callstack.ess.1",
+ cs->entry,
cs->size * sizeof(call_entry));
for(i=oldsize; i<cs->size; i++)
Modified: branches/YARD/callgrind/clo.c
===================================================================
--- branches/YARD/callgrind/clo.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/clo.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -98,7 +98,8 @@
static __inline__
fn_config* new_fnc(void)
{
- fn_config* new = (fn_config*) CLG_MALLOC(sizeof(fn_config));
+ fn_config* new = (fn_config*) CLG_MALLOC("cl.clo.nf.1",
+ sizeof(fn_config));
new->dump_before = CONFIG_DEFAULT;
new->dump_after = CONFIG_DEFAULT;
@@ -121,7 +122,8 @@
static config_node* new_config(Char* name, int length)
{
int i;
- config_node* node = (config_node*) CLG_MALLOC(sizeof(config_node) + length);
+ config_node* node = (config_node*) CLG_MALLOC("cl.clo.nc.1",
+ sizeof(config_node) + length);
for(i=0;i<length;i++) {
if (name[i] == 0) break;
@@ -588,7 +590,7 @@
}
else if (0 == VG_(strncmp)(arg, "--callgrind-out-file=", 21))
- CLG_(clo).out_format = VG_(strdup)(arg+21);
+ CLG_(clo).out_format = VG_(strdup)("cl.clo.pclo.1", arg+21);
else if (0 == VG_(strcmp)(arg, "--mangle-names=yes"))
CLG_(clo).mangle_names = True;
Modified: branches/YARD/callgrind/command.c
===================================================================
--- branches/YARD/callgrind/command.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/command.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -67,7 +67,7 @@
/* name of command file */
size = VG_(strlen)(dir) + VG_(strlen)(DEFAULT_COMMANDNAME) +10;
- command_file = (char*) CLG_MALLOC(size);
+ command_file = (char*) CLG_MALLOC("cl.command.sc.1", size);
CLG_ASSERT(command_file != 0);
VG_(sprintf)(command_file, "%s/%s.%d",
dir, DEFAULT_COMMANDNAME, thisPID);
@@ -76,13 +76,13 @@
* KCachegrind releases, as it doesn't use ".pid" to distinguish
* different callgrind instances from same base directory.
*/
- command_file2 = (char*) CLG_MALLOC(size);
+ command_file2 = (char*) CLG_MALLOC("cl.command.sc.2", size);
CLG_ASSERT(command_file2 != 0);
VG_(sprintf)(command_file2, "%s/%s",
dir, DEFAULT_COMMANDNAME);
size = VG_(strlen)(dir) + VG_(strlen)(DEFAULT_RESULTNAME) +10;
- result_file = (char*) CLG_MALLOC(size);
+ result_file = (char*) CLG_MALLOC("cl.command.sc.3", size);
CLG_ASSERT(result_file != 0);
VG_(sprintf)(result_file, "%s/%s.%d",
dir, DEFAULT_RESULTNAME, thisPID);
@@ -90,12 +90,13 @@
/* If we get a command from a command file without .pid, use
* a result file without .pid suffix
*/
- result_file2 = (char*) CLG_MALLOC(size);
+ result_file2 = (char*) CLG_MALLOC("cl.command.sc.4", size);
CLG_ASSERT(result_file2 != 0);
VG_(sprintf)(result_file2, "%s/%s",
dir, DEFAULT_RESULTNAME);
- info_file = (char*) CLG_MALLOC(VG_(strlen)(DEFAULT_INFONAME) + 10);
+ info_file = (char*) CLG_MALLOC("cl.command.sc.5",
+ VG_(strlen)(DEFAULT_INFONAME) + 10);
CLG_ASSERT(info_file != 0);
VG_(sprintf)(info_file, "%s.%d", DEFAULT_INFONAME, thisPID);
Modified: branches/YARD/callgrind/context.c
===================================================================
--- branches/YARD/callgrind/context.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/context.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -43,7 +43,8 @@
CLG_ASSERT(s != 0);
s->size = N_FNSTACK_INITIAL_ENTRIES;
- s->bottom = (fn_node**) CLG_MALLOC(s->size * sizeof(fn_node*));
+ s->bottom = (fn_node**) CLG_MALLOC("cl.context.ifs.1",
+ s->size * sizeof(fn_node*));
s->top = s->bottom;
s->bottom[0] = 0;
}
@@ -74,7 +75,8 @@
cxts.size = N_CXT_INITIAL_ENTRIES;
cxts.entries = 0;
- cxts.table = (Context**) CLG_MALLOC(cxts.size * sizeof(Context*));
+ cxts.table = (Context**) CLG_MALLOC("cl.context.ict.1",
+ cxts.size * sizeof(Context*));
for (i = 0; i < cxts.size; i++)
cxts.table[i] = 0;
@@ -93,7 +95,8 @@
UInt new_idx;
new_size = 2* cxts.size +3;
- new_table = (Context**) CLG_MALLOC(new_size * sizeof(Context*));
+ new_table = (Context**) CLG_MALLOC("cl.context.rct.1",
+ new_size * sizeof(Context*));
if (!new_table) return;
@@ -190,7 +193,8 @@
if (10 * cxts.entries / cxts.size > 8)
resize_cxt_table();
- new = (Context*) CLG_MALLOC(sizeof(Context)+sizeof(fn_node*)*size);
+ new = (Context*) CLG_MALLOC("cl.context.nc.1",
+ sizeof(Context)+sizeof(fn_node*)*size);
// hash value calculation similar to cxt_hash_val(), but additionally
// copying function pointers in one run
@@ -298,7 +302,8 @@
fn_entries = CLG_(current_fn_stack).top - CLG_(current_fn_stack).bottom;
if (fn_entries == CLG_(current_fn_stack).size-1) {
int new_size = CLG_(current_fn_stack).size *2;
- fn_node** new = (fn_node**) CLG_MALLOC(new_size * sizeof(fn_node*));
+ fn_node** new = (fn_node**) CLG_MALLOC("cl.context.pc.1",
+ new_size * sizeof(fn_node*));
int i;
for(i=0;i<CLG_(current_fn_stack).size;i++)
new[i] = CLG_(current_fn_stack).bottom[i];
Modified: branches/YARD/callgrind/costs.c
===================================================================
--- branches/YARD/callgrind/costs.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/costs.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -43,7 +43,8 @@
if (!cost_chunk_current ||
(cost_chunk_current->size - cost_chunk_current->used < size)) {
- CostChunk* cc = (CostChunk*) CLG_MALLOC(sizeof(CostChunk) +
+ CostChunk* cc = (CostChunk*) CLG_MALLOC("cl.costs.gc.1",
+ sizeof(CostChunk) +
COSTCHUNK_SIZE * sizeof(ULong));
cc->size = COSTCHUNK_SIZE;
cc->used = 0;
Modified: branches/YARD/callgrind/debug.c
===================================================================
--- branches/YARD/callgrind/debug.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/debug.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -429,10 +429,10 @@
VG_(printf)("\n");
}
-void* CLG_(malloc)(UWord s, char* f)
+void* CLG_(malloc)(HChar* cc, UWord s, char* f)
{
CLG_DEBUG(3, "Malloc(%lu) in %s.\n", s, f);
- return VG_(malloc)(s);
+ return VG_(malloc)(cc,s);
}
#else /* CLG_ENABLE_DEBUG */
Modified: branches/YARD/callgrind/dump.c
===================================================================
--- branches/YARD/callgrind/dump.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/dump.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -105,7 +105,8 @@
CLG_(stat).distinct_fns +
CLG_(stat).context_counter;
CLG_ASSERT(dump_array == 0);
- dump_array = (Bool*) CLG_MALLOC(dump_array_size * sizeof(Bool));
+ dump_array = (Bool*) CLG_MALLOC("cl.dump.ida.1",
+ dump_array_size * sizeof(Bool));
obj_dumped = dump_array;
file_dumped = obj_dumped + CLG_(stat).distinct_objs;
fn_dumped = file_dumped + CLG_(stat).distinct_files;
@@ -1218,7 +1219,8 @@
/* allocate bbcc array, insert BBCCs and sort */
prepare_ptr = array =
- (BBCC**) CLG_MALLOC((prepare_count+1) * sizeof(BBCC*));
+ (BBCC**) CLG_MALLOC("cl.dump.pd.1",
+ (prepare_count+1) * sizeof(BBCC*));
CLG_(forall_bbccs)(hash_addPtr);
@@ -1693,12 +1695,13 @@
i++;
}
i = lastSlash;
- out_directory = (Char*) CLG_MALLOC(i+1);
+ out_directory = (Char*) CLG_MALLOC("cl.dump.init_dumps.1", i+1);
VG_(strncpy)(out_directory, out_file, i);
out_directory[i] = 0;
/* allocate space big enough for final filenames */
- filename = (Char*) CLG_MALLOC(VG_(strlen)(out_file)+32);
+ filename = (Char*) CLG_MALLOC("cl.dump.init_dumps.2",
+ VG_(strlen)(out_file)+32);
CLG_ASSERT(filename != 0);
/* Make sure the output base file can be written.
Modified: branches/YARD/callgrind/events.c
===================================================================
--- branches/YARD/callgrind/events.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/events.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -46,7 +46,7 @@
et = &(eventtype[eventtype_count]);
et->id = eventtype_count;
- et->name = (UChar*) VG_(strdup)(name);
+ et->name = (UChar*) VG_(strdup)("cl.events.re.1", name);
et->description = 0;
eventtype_count++;
@@ -77,7 +77,8 @@
{
EventSet* es;
- es = (EventSet*) CLG_MALLOC(sizeof(EventSet) +
+ es = (EventSet*) CLG_MALLOC("cl.events.geSet.1",
+ sizeof(EventSet) +
capacity * sizeof(EventSetEntry));
es->capacity = capacity;
es->size = 0;
@@ -499,7 +500,8 @@
CLG_ASSERT(es != 0);
- em = (EventMapping*) CLG_MALLOC(sizeof(EventMapping) +
+ em = (EventMapping*) CLG_MALLOC("cl.events.geMapping.1",
+ sizeof(EventMapping) +
es->capacity * sizeof(Int));
em->capacity = es->capacity;
em->size = 0;
Modified: branches/YARD/callgrind/fn.c
===================================================================
--- branches/YARD/callgrind/fn.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/fn.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -186,8 +186,8 @@
Int i;
obj_node* new;
- new = (obj_node*) CLG_MALLOC(sizeof(obj_node));
- new->name = di ? VG_(strdup)( VG_(seginfo_filename)(di) )
+ new = (obj_node*) CLG_MALLOC("cl.fn.non.1", sizeof(obj_node));
+ new->name = di ? VG_(strdup)( "cl.fn.non.2",VG_(seginfo_filename)(di) )
: anonymous_obj;
for (i = 0; i < N_FILE_ENTRIES; i++) {
new->files[i] = NULL;
@@ -244,8 +244,9 @@
obj_node* obj, file_node* next)
{
Int i;
- file_node* new = (file_node*) CLG_MALLOC(sizeof(file_node));
- new->name = VG_(strdup)(filename);
+ file_node* new = (file_node*) CLG_MALLOC("cl.fn.nfn.1",
+ sizeof(file_node));
+ new->name = VG_(strdup)("cl.fn.nfn.2", filename);
for (i = 0; i < N_FN_ENTRIES; i++) {
new->fns[i] = NULL;
}
@@ -286,8 +287,9 @@
fn_node* new_fn_node(Char fnname[FILENAME_LEN],
file_node* file, fn_node* next)
{
- fn_node* new = (fn_node*) CLG_MALLOC(sizeof(fn_node));
- new->name = VG_(strdup)(fnname);
+ fn_node* new = (fn_node*) CLG_MALLOC("cl.fn.nfnnd.1",
+ sizeof(fn_node));
+ new->name = VG_(strdup)("cl.fn.nfnnd.2", fnname);
CLG_(stat).distinct_fns++;
new->number = CLG_(stat).distinct_fns;
@@ -574,7 +576,8 @@
if (a->size <= CLG_(stat).distinct_fns)
a->size = CLG_(stat).distinct_fns+1;
- a->array = (UInt*) CLG_MALLOC(a->size * sizeof(UInt));
+ a->array = (UInt*) CLG_MALLOC("cl.fn.gfe.1",
+ a->size * sizeof(UInt));
for(i=0;i<a->size;i++)
a->array[i] = 0;
}
@@ -617,7 +620,7 @@
CLG_DEBUG(0, "Resize fn_active_array: %d => %d\n",
current_fn_active.size, newsize);
- new = (UInt*) CLG_MALLOC(newsize * sizeof(UInt));
+ new = (UInt*) CLG_MALLOC("cl.fn.rfa.1", newsize * sizeof(UInt));
for(i=0;i<current_fn_active.size;i++)
new[i] = current_fn_active.array[i];
while(i<newsize)
Modified: branches/YARD/callgrind/global.h
===================================================================
--- branches/YARD/callgrind/global.h 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/global.h 2008-09-06 18:45:50 UTC (rev 8565)
@@ -868,14 +868,14 @@
void CLG_(print_addr)(Addr addr);
void CLG_(print_addr_ln)(Addr addr);
-void* CLG_(malloc)(UWord s, char* f);
+void* CLG_(malloc)(HChar* cc, UWord s, char* f);
void* CLG_(free)(void* p, char* f);
#if 0
-#define CLG_MALLOC(x) CLG_(malloc)(x,__FUNCTION__)
-#define CLG_FREE(p) CLG_(free)(p,__FUNCTION__)
+#define CLG_MALLOC(_cc,x) CLG_(malloc)((_cc),x,__FUNCTION__)
+#define CLG_FREE(p) CLG_(free)(p,__FUNCTION__)
#else
-#define CLG_MALLOC(x) VG_(malloc)(x)
-#define CLG_FREE(p) VG_(free)(p)
+#define CLG_MALLOC(_cc,x) VG_(malloc)((_cc),x)
+#define CLG_FREE(p) VG_(free)(p)
#endif
#endif /* CLG_GLOBAL */
Modified: branches/YARD/callgrind/jumps.c
===================================================================
--- branches/YARD/callgrind/jumps.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/jumps.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -46,7 +46,8 @@
jccs->size = N_JCC_INITIAL_ENTRIES;
jccs->entries = 0;
- jccs->table = (jCC**) CLG_MALLOC(jccs->size * sizeof(jCC*));
+ jccs->table = (jCC**) CLG_MALLOC("cl.jumps.ijh.1",
+ jccs->size * sizeof(jCC*));
jccs->spontaneous = 0;
for (i = 0; i < jccs->size; i++)
@@ -89,7 +90,8 @@
jCC *curr_jcc, *next_jcc;
new_size = 2* current_jccs.size +3;
- new_table = (jCC**) CLG_MALLOC(new_size * sizeof(jCC*));
+ new_table = (jCC**) CLG_MALLOC("cl.jumps.rjt.1",
+ new_size * sizeof(jCC*));
if (!new_table) return;
@@ -145,7 +147,7 @@
if (10 * current_jccs.entries / current_jccs.size > 8)
resize_jcc_table();
- new = (jCC*) CLG_MALLOC(sizeof(jCC));
+ new = (jCC*) CLG_MALLOC("cl.jumps.nj.1", sizeof(jCC));
new->from = from;
new->jmp = jmp;
Modified: branches/YARD/callgrind/sim.c
===================================================================
--- branches/YARD/callgrind/sim.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/sim.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -214,7 +214,8 @@
c->sectored ? ", sectored":"");
}
- c->tags = (UWord*) CLG_MALLOC(sizeof(UWord) * c->sets * c->assoc);
+ c->tags = (UWord*) CLG_MALLOC("cl.sim.cs_ic.1",
+ sizeof(UWord) * c->sets * c->assoc);
if (clo_collect_cacheuse)
cacheuse_initcache(c);
else
@@ -611,12 +612,15 @@
unsigned int start_mask, start_val;
unsigned int end_mask, end_val;
- c->use = CLG_MALLOC(sizeof(line_use) * c->sets * c->assoc);
- c->loaded = CLG_MALLOC(sizeof(line_loaded) * c->sets * c->assoc);
- c->line_start_mask = CLG_MALLOC(sizeof(int) * c->line_size);
- c->line_end_mask = CLG_MALLOC(sizeof(int) * c->line_size);
+ c->use = CLG_MALLOC("cl.sim.cu_ic.1",
+ sizeof(line_use) * c->sets * c->assoc);
+ c->loaded = CLG_MALLOC("cl.sim.cu_ic.2",
+ sizeof(line_loaded) * c->sets * c->assoc);
+ c->line_start_mask = CLG_MALLOC("cl.sim.cu_ic.3",
+ sizeof(int) * c->line_size);
+ c->line_end_mask = CLG_MALLOC("cl.sim.cu_ic.4",
+ sizeof(int) * c->line_size);
-
c->line_size_mask = c->line_size-1;
/* Meaning of line_start_mask/line_end_mask
@@ -1614,7 +1618,7 @@
{
int i1, i2, i3;
int i;
- char *opt = VG_(strdup)(orig_opt);
+ char *opt = VG_(strdup)("cl.sim.po.1", orig_opt);
i = i1 = opt_len;
Modified: branches/YARD/callgrind/threads.c
===================================================================
--- branches/YARD/callgrind/threads.c 2008-09-06 18:38:07 UTC (rev 8564)
+++ branches/YARD/callgrind/threads.c 2008-09-06 18:45:50 UTC (rev 8565)
@@ -100,7 +100,8 @@
{
thread_info* t;
- t = (thread_info*) CLG_MALLOC(sizeof(thread_info));
+ t = (thread_info*) CLG_MALLOC("cl.threads.nt.1",
+ sizeof(thread_info));
/* init state */
CLG_(init_exec_stack)( &(t->states) );
@@ -323,7 +324,8 @@
static exec_state* new_exec_state(Int sigNum)
{
exec_state* es;
- es = (exec_state*) CLG_MALLOC(sizeof(exec_state));
+ es = (exec_state*) CLG_MALLOC("cl.threads.nes.1",
+ sizeof(exec_state));
/* allocate real cost space: needed as incremented by
* simulation functions */
From: <sv...@va...> - 2008-09-06 18:37:57
Author: sewardj
Date: 2008-09-06 19:38:07 +0100 (Sat, 06 Sep 2008)
New Revision: 8564
Log:
Add cost-center annotations to all allocation points in Drd.
Modified:
branches/YARD/drd/
branches/YARD/drd/drd_barrier.c
branches/YARD/drd/drd_bitmap.c
branches/YARD/drd/drd_clientobj.c
branches/YARD/drd/drd_error.c
branches/YARD/drd/drd_main.c
branches/YARD/drd/drd_malloc_wrappers.c
branches/YARD/drd/drd_rwlock.c
branches/YARD/drd/drd_segment.c
branches/YARD/drd/drd_vc.c
branches/YARD/drd/tests/drd_bitmap_test.c
branches/YARD/drd/tests/pth_cond_race3.stderr.exp
branches/YARD/drd/tests/pth_cond_race3.vgtest
branches/YARD/drd/tests/tc09_bad_unlock.stderr.exp-glibc2.8
Property changes on: branches/YARD/drd
___________________________________________________________________
Name: svn:mergeinfo
-
Modified: branches/YARD/drd/drd_barrier.c
===================================================================
--- branches/YARD/drd/drd_barrier.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_barrier.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -112,7 +112,8 @@
tl_assert(sizeof(((struct barrier_thread_info*)0)->tid) == sizeof(Word));
tl_assert(sizeof(((struct barrier_thread_info*)0)->tid)
>= sizeof(DrdThreadId));
- p->oset = VG_(OSetGen_Create)(0, 0, VG_(malloc), VG_(free));
+ p->oset = VG_(OSetGen_Create)(0, 0, VG_(malloc), "drd.barrier.bi.1",
+ VG_(free));
}
/** Deallocate the memory allocated by barrier_initialize() and in p->oset.
Modified: branches/YARD/drd/drd_bitmap.c
===================================================================
--- branches/YARD/drd/drd_bitmap.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_bitmap.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -63,7 +63,7 @@
/* in drd_bitmap.h. */
tl_assert((1 << BITS_PER_BITS_PER_UWORD) == BITS_PER_UWORD);
- bm = VG_(malloc)(sizeof(*bm));
+ bm = VG_(malloc)("drd.bitmap.bn.1", sizeof(*bm));
tl_assert(bm);
/* Cache initialization. a1 is initialized with a value that never can */
/* match any valid address: the upper ADDR0_BITS bits of a1 are always */
@@ -73,7 +73,8 @@
bm->cache[i].a1 = ~(UWord)1;
bm->cache[i].bm2 = 0;
}
- bm->oset = VG_(OSetGen_Create)(0, 0, VG_(malloc), VG_(free));
+ bm->oset = VG_(OSetGen_Create)(0, 0, VG_(malloc), "drd.bitmap.bn.2",
+ VG_(free));
s_bitmap_creation_count++;
@@ -917,7 +918,7 @@
{
struct bitmap2* bm2;
- bm2 = VG_(malloc)(sizeof(*bm2));
+ bm2 = VG_(malloc)("drd.bitmap.bm2n.1", sizeof(*bm2));
bm2->addr = a1;
bm2->refcnt = 1;
Modified: branches/YARD/drd/drd_clientobj.c
===================================================================
--- branches/YARD/drd/drd_clientobj.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_clientobj.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -53,7 +53,8 @@
void clientobj_init(void)
{
tl_assert(s_clientobj == 0);
- s_clientobj = VG_(OSetGen_Create)(0, 0, VG_(malloc), VG_(free));
+ s_clientobj = VG_(OSetGen_Create)(0, 0, VG_(malloc), "drd.clientobj.ci.1",
+ VG_(free));
tl_assert(s_clientobj);
}
Modified: branches/YARD/drd/drd_error.c
===================================================================
--- branches/YARD/drd/drd_error.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_error.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -95,8 +95,8 @@
{
AddrInfo ai;
const unsigned descr_size = 256;
- Char* descr1 = VG_(malloc)(descr_size);
- Char* descr2 = VG_(malloc)(descr_size);
+ Char* descr1 = VG_(malloc)("drd.error.drdr2.1", descr_size);
+ Char* descr2 = VG_(malloc)("drd.error.drdr2.2", descr_size);
tl_assert(dri);
tl_assert(dri->addr);
Modified: branches/YARD/drd/drd_main.c
===================================================================
--- branches/YARD/drd/drd_main.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_main.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -654,7 +654,7 @@
const unsigned msg_size = 256;
char* msg;
- msg = VG_(malloc)(msg_size);
+ msg = VG_(malloc)("drd.main.dptj.1", msg_size);
tl_assert(msg);
VG_(snprintf)(msg, msg_size,
"drd_post_thread_join joiner = %d/%d, joinee = %d/%d",
Modified: branches/YARD/drd/drd_malloc_wrappers.c
===================================================================
--- branches/YARD/drd/drd_malloc_wrappers.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_malloc_wrappers.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -70,7 +70,8 @@
static
DRD_Chunk* create_DRD_Chunk(ThreadId tid, Addr p, SizeT size)
{
- DRD_Chunk* mc = VG_(malloc)(sizeof(DRD_Chunk));
+ DRD_Chunk* mc = VG_(malloc)("drd.malloc_wrappers.cDC.1",
+ sizeof(DRD_Chunk));
mc->data = p;
mc->size = size;
mc->where = VG_(record_ExeContext)(tid, 0);
Modified: branches/YARD/drd/drd_rwlock.c
===================================================================
--- branches/YARD/drd/drd_rwlock.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_rwlock.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -180,7 +180,8 @@
tl_assert(p->type == ClientRwlock);
p->cleanup = (void(*)(DrdClientobj*))&rwlock_cleanup;
- p->thread_info = VG_(OSetGen_Create)(0, 0, VG_(malloc), VG_(free));
+ p->thread_info = VG_(OSetGen_Create)(
+ 0, 0, VG_(malloc), "drd.rwlock.ri.1", VG_(free));
p->acquiry_time_ms = 0;
p->acquired_at = 0;
}
Modified: branches/YARD/drd/drd_segment.c
===================================================================
--- branches/YARD/drd/drd_segment.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_segment.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -116,7 +116,7 @@
if (s_max_alive_segments_count < s_alive_segments_count)
s_max_alive_segments_count = s_alive_segments_count;
- sg = VG_(malloc)(sizeof(*sg));
+ sg = VG_(malloc)("drd.segment.sn.1", sizeof(*sg));
tl_assert(sg);
sg_init(sg, creator, created);
return sg;
Modified: branches/YARD/drd/drd_vc.c
===================================================================
--- branches/YARD/drd/drd_vc.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/drd_vc.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -320,11 +320,13 @@
{
if (vc->vc)
{
- vc->vc = VG_(realloc)(vc->vc, new_capacity * sizeof(vc->vc[0]));
+ vc->vc = VG_(realloc)("drd.vc.vr.1",
+ vc->vc, new_capacity * sizeof(vc->vc[0]));
}
else if (new_capacity > 0)
{
- vc->vc = VG_(malloc)(new_capacity * sizeof(vc->vc[0]));
+ vc->vc = VG_(malloc)("drd.vc.vr.2",
+ new_capacity * sizeof(vc->vc[0]));
}
else
{
Modified: branches/YARD/drd/tests/drd_bitmap_test.c
===================================================================
--- branches/YARD/drd/tests/drd_bitmap_test.c 2008-09-06 18:36:13 UTC (rev 8563)
+++ branches/YARD/drd/tests/drd_bitmap_test.c 2008-09-06 18:38:07 UTC (rev 8564)
@@ -10,7 +10,7 @@
/* Replacements for core functionality. */
-void* VG_(malloc)(SizeT nbytes)
+void* VG_(malloc)(HChar* cc, SizeT nbytes)
{ return malloc(nbytes); }
void VG_(free)(void* p)
{ return free(p); }
Property changes on: branches/YARD/drd/tests/pth_cond_race3.stderr.exp
___________________________________________________________________
Name: svn:mergeinfo
-
Property changes on: branches/YARD/drd/tests/pth_cond_race3.vgtest
___________________________________________________________________
Name: svn:mergeinfo
-
Property changes on: branches/YARD/drd/tests/tc09_bad_unlock.stderr.exp-glibc2.8
___________________________________________________________________
Name: svn:mergeinfo
-
From: <sv...@va...> - 2008-09-06 18:36:03
Author: sewardj
Date: 2008-09-06 19:36:13 +0100 (Sat, 06 Sep 2008)
New Revision: 8563
Log:
Add cost-center annotations to all allocation points in Memcheck.
Modified:
branches/YARD/memcheck/mc_errors.c
branches/YARD/memcheck/mc_leakcheck.c
branches/YARD/memcheck/mc_main.c
branches/YARD/memcheck/mc_malloc_wrappers.c
branches/YARD/memcheck/mc_translate.c
branches/YARD/memcheck/tests/malloc_free_fill.stderr.exp
branches/YARD/memcheck/tests/oset_test.c
Modified: branches/YARD/memcheck/mc_errors.c
===================================================================
--- branches/YARD/memcheck/mc_errors.c 2008-09-06 18:34:50 UTC (rev 8562)
+++ branches/YARD/memcheck/mc_errors.c 2008-09-06 18:36:13 UTC (rev 8563)
@@ -1240,7 +1240,7 @@
if (VG_(get_supp_kind)(su) == ParamSupp) {
eof = VG_(get_line) ( fd, buf, nBuf );
if (eof) return False;
- VG_(set_supp_string)(su, VG_(strdup)(buf));
+ VG_(set_supp_string)(su, VG_(strdup)("mc.resi.1", buf));
}
return True;
}
Modified: branches/YARD/memcheck/mc_leakcheck.c
===================================================================
--- branches/YARD/memcheck/mc_leakcheck.c 2008-09-06 18:34:50 UTC (rev 8562)
+++ branches/YARD/memcheck/mc_leakcheck.c 2008-09-06 18:36:13 UTC (rev 8563)
@@ -85,7 +85,7 @@
n_starts = 1;
while (True) {
- starts = VG_(malloc)( n_starts * sizeof(Addr) );
+ starts = VG_(malloc)( "mc.gss.1", n_starts * sizeof(Addr) );
if (starts == NULL)
break;
r = VG_(am_get_segment_starts)( starts, n_starts );
@@ -469,7 +469,7 @@
p->indirect_bytes += lc_markstack[i].indirect;
} else {
n_lossrecords ++;
- p = VG_(malloc)(sizeof(LossRecord));
+ p = VG_(malloc)( "mc.fr.1", sizeof(LossRecord));
p->loss_mode = lc_markstack[i].state;
p->allocated_at = where;
p->total_bytes = lc_shadows[i]->szB;
@@ -608,7 +608,8 @@
VG_(ssort)((void*)mallocs, n_mallocs,
sizeof(VgHashNode*), lc_compar);
- malloc_chunk_holds_a_pool_chunk = VG_(calloc)( n_mallocs, sizeof(Bool) );
+ malloc_chunk_holds_a_pool_chunk = VG_(calloc)( "mc.fas.1",
+ n_mallocs, sizeof(Bool) );
*n_shadows = n_mallocs;
@@ -641,7 +642,7 @@
}
tl_assert(*n_shadows > 0);
- shadows = VG_(malloc)(sizeof(VgHashNode*) * (*n_shadows));
+ shadows = VG_(malloc)("mc.fas.2", sizeof(VgHashNode*) * (*n_shadows));
s = 0;
/* Copy the mempool chunks into the final array. */
@@ -738,7 +739,8 @@
lc_max_mallocd_addr = lc_shadows[lc_n_shadows-1]->data
+ lc_shadows[lc_n_shadows-1]->szB;
- lc_markstack = VG_(malloc)( lc_n_shadows * sizeof(*lc_markstack) );
+ lc_markstack = VG_(malloc)( "mc.ddml.1",
+ lc_n_shadows * sizeof(*lc_markstack) );
for (i = 0; i < lc_n_shadows; i++) {
lc_markstack[i].next = -1;
lc_markstack[i].state = Unreached;
Modified: branches/YARD/memcheck/mc_main.c
===================================================================
--- branches/YARD/memcheck/mc_main.c 2008-09-06 18:34:50 UTC (rev 8562)
+++ branches/YARD/memcheck/mc_main.c 2008-09-06 18:36:13 UTC (rev 8563)
@@ -399,7 +399,7 @@
tl_assert(sizeof(Addr) == sizeof(void*));
auxmap_L2 = VG_(OSetGen_Create)( /*keyOff*/ offsetof(AuxMapEnt,base),
/*fastCmp*/ NULL,
- VG_(malloc), VG_(free) );
+ VG_(malloc), "mc.iaLL.1", VG_(free) );
}
/* Check representation invariants; if OK return NULL; else a
@@ -891,7 +891,8 @@
{
return VG_(OSetGen_Create)( offsetof(SecVBitNode, a),
NULL, // use fast comparisons
- VG_(malloc), VG_(free) );
+ VG_(malloc), "mc.cSVT.1 (sec VBit table)",
+ VG_(free) );
}
static void gcSecVBitTable(void)
@@ -2151,8 +2152,8 @@
static OSet* ocacheL2 = NULL;
-static void* ocacheL2_malloc ( SizeT szB ) {
- return VG_(malloc)(szB);
+static void* ocacheL2_malloc ( HChar* cc, SizeT szB ) {
+ return VG_(malloc)(cc, szB);
}
static void ocacheL2_free ( void* v ) {
VG_(free)( v );
@@ -2169,7 +2170,7 @@
ocacheL2
= VG_(OSetGen_Create)( offsetof(OCacheLine,tag),
NULL, /* fast cmp */
- ocacheL2_malloc, ocacheL2_free );
+ ocacheL2_malloc, "mc.ioL2", ocacheL2_free );
tl_assert(ocacheL2);
stats__ocacheL2_n_nodes = 0;
}
@@ -4801,7 +4802,7 @@
tl_assert(cgb_used == cgb_size);
sz_new = (cgbs == NULL) ? 10 : (2 * cgb_size);
- cgbs_new = VG_(malloc)( sz_new * sizeof(CGenBlock) );
+ cgbs_new = VG_(malloc)( "mc.acb.1", sz_new * sizeof(CGenBlock) );
for (i = 0; i < cgb_used; i++)
cgbs_new[i] = cgbs[i];
@@ -4901,7 +4902,7 @@
/* VG_(printf)("allocated %d %p\n", i, cgbs); */
cgbs[i].start = arg[1];
cgbs[i].size = arg[2];
- cgbs[i].desc = VG_(strdup)((Char *)arg[3]);
+ cgbs[i].desc = VG_(strdup)("mc.mhcr.1", (Char *)arg[3]);
cgbs[i].where = VG_(record_ExeContext) ( tid, 0/*first_ip_delta*/ );
*ret = i;
Modified: branches/YARD/memcheck/mc_malloc_wrappers.c
===================================================================
--- branches/YARD/memcheck/mc_malloc_wrappers.c 2008-09-06 18:34:50 UTC (rev 8562)
+++ branches/YARD/memcheck/mc_malloc_wrappers.c 2008-09-06 18:36:13 UTC (rev 8563)
@@ -135,7 +135,7 @@
MC_Chunk* create_MC_Chunk ( ExeContext* ec, Addr p, SizeT szB,
MC_AllocKind kind)
{
- MC_Chunk* mc = VG_(malloc)(sizeof(MC_Chunk));
+ MC_Chunk* mc = VG_(malloc)("mc.cMC.1 (a MC_Chunk)", sizeof(MC_Chunk));
mc->data = p;
mc->szB = szB;
mc->allockind = kind;
@@ -501,7 +501,7 @@
VG_(tool_panic)("MC_(create_mempool): duplicate pool creation");
}
- mp = VG_(malloc)(sizeof(MC_Mempool));
+ mp = VG_(malloc)("mc.cm.1", sizeof(MC_Mempool));
mp->pool = pool;
mp->rzB = rzB;
mp->is_zeroed = is_zeroed;
Modified: branches/YARD/memcheck/mc_translate.c
===================================================================
--- branches/YARD/memcheck/mc_translate.c 2008-09-06 18:34:50 UTC (rev 8562)
+++ branches/YARD/memcheck/mc_translate.c 2008-09-06 18:36:13 UTC (rev 8563)
@@ -3787,7 +3787,8 @@
IRExpr* guard;
IRCallee* cee;
Bool alreadyPresent;
- XArray* pairs = VG_(newXA)( VG_(malloc), VG_(free), sizeof(Pair) );
+ XArray* pairs = VG_(newXA)( VG_(malloc), "mc.ft.1",
+ VG_(free), sizeof(Pair) );
/* Scan forwards through the statements. Each time a call to one
of the relevant helpers is seen, check if we have made a
previous call to the same helper using the same guard
Property changes on: branches/YARD/memcheck/tests/malloc_free_fill.stderr.exp
___________________________________________________________________
Name: svn:mergeinfo
-
Modified: branches/YARD/memcheck/tests/oset_test.c
===================================================================
--- branches/YARD/memcheck/tests/oset_test.c 2008-09-06 18:34:50 UTC (rev 8562)
+++ branches/YARD/memcheck/tests/oset_test.c 2008-09-06 18:36:13 UTC (rev 8563)
@@ -45,7 +45,7 @@
return seed;
}
-static void* allocate_node(SizeT szB)
+static void* allocate_node(HChar* cc, SizeT szB)
{ return malloc(szB); }
static void free_node(void* p)
@@ -84,7 +84,7 @@
// comparisons.
OSet* oset = VG_(OSetGen_Create)(0,
NULL,
- allocate_node, free_node);
+ allocate_node, "oset_test.1", free_node);
// Try some operations on an empty OSet to ensure they don't screw up.
vg_assert( ! VG_(OSetGen_Contains)(oset, &v) );
@@ -217,7 +217,7 @@
// Create a static OSet of Ints. This one uses fast (built-in)
// comparisons.
- OSet* oset = VG_(OSetWord_Create)(allocate_node, free_node);
+ OSet* oset = VG_(OSetWord_Create)(allocate_node, "oset_test.2", free_node);
// Try some operations on an empty OSet to ensure they don't screw up.
vg_assert( ! VG_(OSetWord_Contains)(oset, v) );
@@ -375,7 +375,7 @@
// comparisons.
OSet* oset = VG_(OSetGen_Create)(offsetof(Block, first),
blockCmp,
- allocate_node, free_node);
+ allocate_node, "oset_test.3", free_node);
// Try some operations on an empty OSet to ensure they don't screw up.
vg_assert( ! VG_(OSetGen_Contains)(oset, &v) );
From: <sv...@va...> - 2008-09-06 18:34:45
Author: sewardj
Date: 2008-09-06 19:34:50 +0100 (Sat, 06 Sep 2008)
New Revision: 8562
Log:
* libhb_core.c: add hashing to backtrace comparison for the
event-map's context-tree
* libhb_core.c: fix massive space leak in msm_handle_{read,write}
* add cost-center annotations to all allocation points
Modified:
branches/YARD/helgrind/hg_main.c
branches/YARD/helgrind/hg_wordset.c
branches/YARD/helgrind/hg_wordset.h
branches/YARD/helgrind/libhb.h
branches/YARD/helgrind/libhb_vg.c
Modified: branches/YARD/helgrind/hg_main.c
===================================================================
--- branches/YARD/helgrind/hg_main.c 2008-08-29 23:34:06 UTC (rev 8561)
+++ branches/YARD/helgrind/hg_main.c 2008-09-06 18:34:50 UTC (rev 8562)
@@ -198,10 +198,10 @@
/*--- Some very basic stuff ---*/
/*----------------------------------------------------------------*/
-static void* hg_zalloc ( SizeT n ) {
+static void* hg_zalloc ( HChar* cc, SizeT n ) {
void* p;
tl_assert(n > 0);
- p = VG_(malloc)( n );
+ p = VG_(malloc)( cc, n );
tl_assert(p);
VG_(memset)(p, 0, n);
return p;
@@ -357,7 +357,7 @@
static Thread* mk_Thread ( Thr* hbthr ) {
static Int indx = 1;
- Thread* thread = hg_zalloc( sizeof(Thread) );
+ Thread* thread = hg_zalloc( "hg.mk_Thread.1", sizeof(Thread) );
thread->locksetA = HG_(emptyWS)( univ_lsets );
thread->locksetW = HG_(emptyWS)( univ_lsets );
thread->magic = Thread_MAGIC;
@@ -373,7 +373,7 @@
// Make a new lock which is unlocked (hence ownerless)
static Lock* mk_LockN ( LockKind kind, Addr guestaddr ) {
static ULong unique = 0;
- Lock* lock = hg_zalloc( sizeof(Lock) );
+ Lock* lock = hg_zalloc( "hg.mk_Lock.1", sizeof(Lock) );
lock->admin = admin_locks;
lock->unique = unique++;
lock->magic = LockN_MAGIC;
@@ -500,7 +500,7 @@
tl_assert(lk->heldBy == NULL); /* can't w-lock recursively */
tl_assert(!lk->heldW);
lk->heldW = True;
- lk->heldBy = VG_(newBag)( hg_zalloc, hg_free );
+ lk->heldBy = VG_(newBag)( hg_zalloc, "hg.lNaw.1", hg_free );
VG_(addToBag)( lk->heldBy, (Word)thr );
break;
case LK_mbRec:
@@ -554,7 +554,7 @@
VG_(addToBag)(lk->heldBy, (Word)thr);
} else {
lk->heldW = False;
- lk->heldBy = VG_(newBag)( hg_zalloc, hg_free );
+ lk->heldBy = VG_(newBag)( hg_zalloc, "hg.lNar.1", hg_free );
VG_(addToBag)( lk->heldBy, (Word)thr );
}
tl_assert(!lk->heldW);
@@ -801,12 +801,13 @@
tl_assert(sizeof(Addr) == sizeof(Word));
tl_assert(map_threads == NULL);
- map_threads = hg_zalloc( VG_N_THREADS * sizeof(Thread*) );
+ map_threads = hg_zalloc( "hg.ids.1", VG_N_THREADS * sizeof(Thread*) );
tl_assert(map_threads != NULL);
tl_assert(sizeof(Addr) == sizeof(Word));
tl_assert(map_locks == NULL);
- map_locks = VG_(newFM)( hg_zalloc, hg_free, NULL/*unboxed Word cmp*/);
+ map_locks = VG_(newFM)( hg_zalloc, "hg.ids.2", hg_free,
+ NULL/*unboxed Word cmp*/);
tl_assert(map_locks != NULL);
__bus_lock_Lock = mk_LockN( LK_nonRec, (Addr)&__bus_lock );
@@ -814,15 +815,18 @@
VG_(addToFM)( map_locks, (Word)&__bus_lock, (Word)__bus_lock_Lock );
tl_assert(univ_tsets == NULL);
- univ_tsets = HG_(newWordSetU)( hg_zalloc, hg_free, 8/*cacheSize*/ );
+ univ_tsets = HG_(newWordSetU)( hg_zalloc, "hg.ids.3", hg_free,
+ 8/*cacheSize*/ );
tl_assert(univ_tsets != NULL);
tl_assert(univ_lsets == NULL);
- univ_lsets = HG_(newWordSetU)( hg_zalloc, hg_free, 8/*cacheSize*/ );
+ univ_lsets = HG_(newWordSetU)( hg_zalloc, "hg.ids.4", hg_free,
+ 8/*cacheSize*/ );
tl_assert(univ_lsets != NULL);
tl_assert(univ_laog == NULL);
- univ_laog = HG_(newWordSetU)( hg_zalloc, hg_free, 24/*cacheSize*/ );
+ univ_laog = HG_(newWordSetU)( hg_zalloc, "hg.ids.5 (univ_laog)",
+ hg_free, 24/*cacheSize*/ );
tl_assert(univ_laog != NULL);
/* Set up entries for the root thread */
@@ -1322,7 +1326,7 @@
HG_(cardinalityWS)( univ_lsets, lset_old), lk );
if (lk->appeared_at) {
if (ga_to_lastlock == NULL)
- ga_to_lastlock = VG_(newFM)( hg_zalloc, hg_free, NULL );
+ ga_to_lastlock = VG_(newFM)( hg_zalloc, "hg.rlll.1", hg_free, NULL );
VG_(addToFM)( ga_to_lastlock, ga_of_access, (Word)lk->appeared_at );
stats__ga_LL_adds++;
}
@@ -2339,7 +2343,7 @@
static void map_cond_to_SO_INIT ( void ) {
if (UNLIKELY(map_cond_to_SO == NULL)) {
- map_cond_to_SO = VG_(newFM)( hg_zalloc, hg_free, NULL );
+ map_cond_to_SO = VG_(newFM)( hg_zalloc, "hg.mctSI.1", hg_free, NULL );
tl_assert(map_cond_to_SO != NULL);
}
}
@@ -2672,7 +2676,8 @@
static void map_sem_to_SO_stack_INIT ( void ) {
if (map_sem_to_SO_stack == NULL) {
- map_sem_to_SO_stack = VG_(newFM)( hg_zalloc, hg_free, NULL );
+ map_sem_to_SO_stack = VG_(newFM)( hg_zalloc, "hg.mstSs.1",
+ hg_free, NULL );
tl_assert(map_sem_to_SO_stack != NULL);
}
}
@@ -2688,7 +2693,7 @@
tl_assert(xa);
VG_(addToXA)( xa, &so );
} else {
- xa = VG_(newXA)( hg_zalloc, hg_free, sizeof(SO*) );
+ xa = VG_(newXA)( hg_zalloc, "hg.pSfs.1", hg_free, sizeof(SO*) );
VG_(addToXA)( xa, &so );
VG_(addToFM)( map_sem_to_SO_stack, (Word)sem, (Word)xa );
}
@@ -2978,7 +2983,7 @@
presentF = outs_new == links->outs;
links->outs = outs_new;
} else {
- links = hg_zalloc(sizeof(LAOGLinks));
+ links = hg_zalloc("hg.lae.1", sizeof(LAOGLinks));
links->inns = HG_(emptyWS)( univ_laog );
links->outs = HG_(singletonWS)( univ_laog, (Word)dst );
VG_(addToFM)( laog, (Word)src, (Word)links );
@@ -2994,7 +2999,7 @@
presentR = inns_new == links->inns;
links->inns = inns_new;
} else {
- links = hg_zalloc(sizeof(LAOGLinks));
+ links = hg_zalloc("hg.lae.2", sizeof(LAOGLinks));
links->inns = HG_(singletonWS)( univ_laog, (Word)src );
links->outs = HG_(emptyWS)( univ_laog );
VG_(addToFM)( laog, (Word)dst, (Word)links );
@@ -3019,7 +3024,8 @@
if (VG_(lookupFM)( laog_exposition, NULL, NULL, (Word)&expo )) {
/* we already have it; do nothing */
} else {
- LAOGLinkExposition* expo2 = hg_zalloc(sizeof(LAOGLinkExposition));
+ LAOGLinkExposition* expo2 = hg_zalloc("hg.lae.3",
+ sizeof(LAOGLinkExposition));
expo2->src_ga = src->guestaddr;
expo2->dst_ga = dst->guestaddr;
expo2->src_ec = src->acquired_at;
@@ -3148,8 +3154,8 @@
return NULL;
ret = NULL;
- stack = VG_(newXA)( hg_zalloc, hg_free, sizeof(Lock*) );
- visited = VG_(newFM)( hg_zalloc, hg_free, NULL/*unboxedcmp*/ );
+ stack = VG_(newXA)( hg_zalloc, "hg.lddft.1", hg_free, sizeof(Lock*) );
+ visited = VG_(newFM)( hg_zalloc, "hg.lddft.2", hg_free, NULL/*unboxedcmp*/ );
(void) VG_(addToXA)( stack, &src );
@@ -3202,9 +3208,10 @@
return;
if (!laog)
- laog = VG_(newFM)( hg_zalloc, hg_free, NULL/*unboxedcmp*/ );
+ laog = VG_(newFM)( hg_zalloc, "hg.lptal.1",
+ hg_free, NULL/*unboxedcmp*/ );
if (!laog_exposition)
- laog_exposition = VG_(newFM)( hg_zalloc, hg_free,
+ laog_exposition = VG_(newFM)( hg_zalloc, "hg.lptal.2", hg_free,
cmp_LAOGLinkExposition );
/* First, the check. Complain if there is any path in laog from lk
@@ -3310,9 +3317,9 @@
UWord* ws_words;
if (!laog)
- laog = VG_(newFM)( hg_zalloc, hg_free, NULL/*unboxedcmp*/ );
+ laog = VG_(newFM)( hg_zalloc, "hg.lhld.1", hg_free, NULL/*unboxedcmp*/ );
if (!laog_exposition)
- laog_exposition = VG_(newFM)( hg_zalloc, hg_free,
+ laog_exposition = VG_(newFM)( hg_zalloc, "hg.lhld.2", hg_free,
cmp_LAOGLinkExposition );
HG_(getPayloadWS)( &ws_words, &ws_size, univ_lsets, locksToDelete );
@@ -3344,7 +3351,7 @@
static MallocMeta* new_MallocMeta ( void ) {
- MallocMeta* md = hg_zalloc( sizeof(MallocMeta) );
+ MallocMeta* md = hg_zalloc( "hg.new_MallocMeta.1", sizeof(MallocMeta) );
tl_assert(md);
return md;
}
@@ -3806,7 +3813,8 @@
static void map_pthread_t_to_Thread_INIT ( void ) {
if (UNLIKELY(map_pthread_t_to_Thread == NULL)) {
- map_pthread_t_to_Thread = VG_(newFM)( hg_zalloc, hg_free, NULL );
+ map_pthread_t_to_Thread = VG_(newFM)( hg_zalloc, "hg.mpttT.1",
+ hg_free, NULL );
tl_assert(map_pthread_t_to_Thread != NULL);
}
}
@@ -4051,7 +4059,8 @@
if (!str)
str = "(null)";
if (!string_table) {
- string_table = VG_(newFM)( hg_zalloc, hg_free, string_table_cmp );
+ string_table = VG_(newFM)( hg_zalloc, "hg.sts.1",
+ hg_free, string_table_cmp );
tl_assert(string_table);
}
if (VG_(lookupFM)( string_table,
@@ -4060,7 +4069,7 @@
if (0) VG_(printf)("string_table_strdup: %p -> %p\n", str, copy );
return copy;
} else {
- copy = VG_(strdup)(str);
+ copy = VG_(strdup)("hg.sts.2", str);
tl_assert(copy);
VG_(addToFM)( string_table, (Word)copy, (Word)copy );
return copy;
@@ -4086,11 +4095,11 @@
stats__ga_LockN_to_P_queries++;
tl_assert( is_sane_LockN(lkn) );
if (!yaWFM) {
- yaWFM = VG_(newFM)( hg_zalloc, hg_free, lock_unique_cmp );
+ yaWFM = VG_(newFM)( hg_zalloc, "hg.mLPfLN.1", hg_free, lock_unique_cmp );
tl_assert(yaWFM);
}
if (!VG_(lookupFM)( yaWFM, NULL, (Word*)&lkp, (Word)lkn)) {
- lkp = hg_zalloc( sizeof(Lock) );
+ lkp = hg_zalloc( "hg.mLPfLN.2", sizeof(Lock) );
*lkp = *lkn;
lkp->admin = NULL;
lkp->magic = LockP_MAGIC;
@@ -4452,7 +4461,7 @@
XArray* xa;
UWord* ts_words;
UWord ts_size, i;
- xa = VG_(newXA)( hg_zalloc, hg_free, sizeof(Thread*) );
+ xa = VG_(newXA)( hg_zalloc, "hg.cTbei.1", hg_free, sizeof(Thread*) );
tl_assert(xa);
HG_(getPayloadWS)( &ts_words, &ts_size, univ_tsets, tset );
tl_assert(ts_words);
Modified: branches/YARD/helgrind/hg_wordset.c
===================================================================
--- branches/YARD/helgrind/hg_wordset.c 2008-08-29 23:34:06 UTC (rev 8561)
+++ branches/YARD/helgrind/hg_wordset.c 2008-09-06 18:34:50 UTC (rev 8562)
@@ -140,7 +140,8 @@
corresponding ix2vec entry number. The two mappings are mutually
redundant. */
struct _WordSetU {
- void* (*alloc)(SizeT);
+ void* (*alloc)(HChar*,SizeT);
+ HChar* cc;
void (*dealloc)(void*);
WordFM* vec2ix; /* WordVec-to-WordSet mapping tree */
WordVec** ix2vec; /* WordSet-to-WordVec mapping array */
@@ -176,12 +177,12 @@
{
WordVec* wv;
tl_assert(sz >= 0);
- wv = wsu->alloc( sizeof(WordVec) );
+ wv = wsu->alloc( wsu->cc, sizeof(WordVec) );
wv->owner = wsu;
wv->words = NULL;
wv->size = sz;
if (sz > 0) {
- wv->words = wsu->alloc( (SizeT)sz * sizeof(UWord) );
+ wv->words = wsu->alloc( wsu->cc, (SizeT)sz * sizeof(UWord) );
}
return wv;
}
@@ -238,7 +239,7 @@
return;
new_sz = 2 * wsu->ix2vec_size;
if (new_sz == 0) new_sz = 2;
- new_vec = wsu->alloc( new_sz * sizeof(WordVec*) );
+ new_vec = wsu->alloc( wsu->cc, new_sz * sizeof(WordVec*) );
tl_assert(new_vec);
for (i = 0; i < wsu->ix2vec_size; i++)
new_vec[i] = wsu->ix2vec[i];
@@ -305,18 +306,21 @@
}
-WordSetU* HG_(newWordSetU) ( void* (*alloc_nofail)( SizeT ),
+WordSetU* HG_(newWordSetU) ( void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word cacheSize )
{
WordSetU* wsu;
WordVec* empty;
- wsu = alloc_nofail( sizeof(WordSetU) );
+ wsu = alloc_nofail( cc, sizeof(WordSetU) );
VG_(memset)( wsu, 0, sizeof(WordSetU) );
wsu->alloc = alloc_nofail;
+ wsu->cc = cc;
wsu->dealloc = dealloc;
- wsu->vec2ix = VG_(newFM)( alloc_nofail, dealloc, cmp_WordVecs_for_FM );
+ wsu->vec2ix = VG_(newFM)( alloc_nofail, cc,
+ dealloc, cmp_WordVecs_for_FM );
wsu->ix2vec_used = 0;
wsu->ix2vec_size = 0;
wsu->ix2vec = NULL;
Modified: branches/YARD/helgrind/hg_wordset.h
===================================================================
--- branches/YARD/helgrind/hg_wordset.h 2008-08-29 23:34:06 UTC (rev 8561)
+++ branches/YARD/helgrind/hg_wordset.h 2008-09-06 18:34:50 UTC (rev 8562)
@@ -47,7 +47,8 @@
typedef UInt WordSet; /* opaque, small int index */
/* Allocate and initialise a WordSetU */
-WordSetU* HG_(newWordSetU) ( void* (*alloc_nofail)( SizeT ),
+WordSetU* HG_(newWordSetU) ( void* (*alloc_nofail)( HChar*, SizeT ),
+ HChar* cc,
void (*dealloc)(void*),
Word cacheSize );
Modified: branches/YARD/helgrind/libhb.h
===================================================================
--- branches/YARD/helgrind/libhb.h 2008-08-29 23:34:06 UTC (rev 8561)
+++ branches/YARD/helgrind/libhb.h 2008-09-06 18:34:50 UTC (rev 8562)
@@ -52,7 +52,7 @@
'shadow_alloc' should never return NULL, instead they should simply
not return if they encounter an out-of-memory condition. */
Thr* libhb_init (
- void* (*zalloc)( SizeT ),
+ void* (*zalloc)( HChar*, SizeT ),
void (*dealloc)( void* ),
void* (*shadow_alloc)( SizeT ),
void (*get_stacktrace)( Thr*, Addr*, UWord ),
Modified: branches/YARD/helgrind/libhb_vg.c
===================================================================
--- branches/YARD/helgrind/libhb_vg.c 2008-08-29 23:34:06 UTC (rev 8561)
+++ branches/YARD/helgrind/libhb_vg.c 2008-09-06 18:34:50 UTC (rev 8562)
@@ -60,8 +60,8 @@
#define libhbPlainVG_OSetGen_Insert(_arg1, _arg2) \
vgPlain_OSetGen_Insert((_arg1),(_arg2))
-#define libhbPlainVG_OSetGen_Create(_arg1, _arg2, _arg3, _arg4) \
- vgPlain_OSetGen_Create((_arg1),(_arg2),(_arg3),(_arg4))
+#define libhbPlainVG_OSetGen_Create(_arg1, _arg2, _arg3, _arg4, _arg5) \
+ vgPlain_OSetGen_Create((_arg1),(_arg2),(_arg3),(_arg4),(_arg5))
#define libhbPlainVG_OSetGen_Size(_arg1) \
vgPlain_OSetGen_Size((_arg1))
From: Tom H. <th...@cy...> - 2008-09-06 03:10:11
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-09-06 03:15:02 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 346 tests, 60 stderr failures, 1 stdout failure, 29 post failures ==
memcheck/tests/file_locking (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/origin1-yes (stderr)
memcheck/tests/origin4-many (stderr)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/varinfo1 (stderr)
memcheck/tests/varinfo2 (stderr)
memcheck/tests/varinfo3 (stderr)
memcheck/tests/varinfo4 (stderr)
memcheck/tests/varinfo5 (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/x86/bug152022 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/xor-undef-x86 (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/alloc-fns-A (post)
massif/tests/alloc-fns-B (post)
massif/tests/basic (post)
massif/tests/basic2 (post)
massif/tests/big-alloc (post)
massif/tests/culling1 (stderr)
massif/tests/culling2 (stderr)
massif/tests/custom_alloc (post)
massif/tests/deep-A (post)
massif/tests/deep-B (stderr)
massif/tests/deep-B (post)
massif/tests/deep-C (stderr)
massif/tests/deep-C (post)
massif/tests/deep-D (post)
massif/tests/ignoring (post)
massif/tests/insig (post)
massif/tests/long-names (post)
massif/tests/long-time (post)
massif/tests/new-cpp (post)
massif/tests/null (post)
massif/tests/one (post)
massif/tests/overloaded-new (post)
massif/tests/peak (post)
massif/tests/peak2 (stderr)
massif/tests/peak2 (post)
massif/tests/realloc (stderr)
massif/tests/realloc (post)
massif/tests/thresholds_0_0 (post)
massif/tests/thresholds_0_10 (post)
massif/tests/thresholds_10_0 (post)
massif/tests/thresholds_10_10 (post)
massif/tests/thresholds_5_0 (post)
massif/tests/thresholds_5_10 (post)
massif/tests/zero1 (post)
massif/tests/zero2 (post)
none/tests/blockfault (stderr)
none/tests/mremap2 (stdout)
none/tests/shell (stderr)
none/tests/shell_valid1 (stderr)
none/tests/shell_valid2 (stderr)
none/tests/shell_valid3 (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/hg06_readshared (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc02_simple_tls (stderr)
helgrind/tests/tc03_re_excl (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc07_hbl1 (stderr)
helgrind/tests/tc08_hbl2 (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc11_XCHG (stderr)
helgrind/tests/tc12_rwl_trivial (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
helgrind/tests/tc24_nonzero_sem (stderr)
From: Tom H. <th...@cy...> - 2008-09-06 02:54:22
Nightly build on aston ( x86_64, Fedora Core 5 ) started at 2008-09-06 03:20:05 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 444 tests, 8 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/file_locking (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-09-06 02:44:36
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2008-09-06 03:05:04 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 438 tests, 6 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/file_locking (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-09-06 02:41:54
Nightly build on trojan ( x86_64, Fedora Core 6 ) started at 2008-09-06 03:25:05 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 442 tests, 9 stderr failures, 5 stdout failures, 0 post failures ==
memcheck/tests/file_locking (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/bug133694 (stdout)
memcheck/tests/x86/bug133694 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/cmdline1 (stdout)
none/tests/cmdline2 (stdout)
none/tests/mremap2 (stdout)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-09-06 02:23:39
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2008-09-06 03:00:02 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 444 tests, 31 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/file_locking (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/insn_ssse3 (stdout)
none/tests/amd64/insn_ssse3 (stderr)
none/tests/amd64/ssse3_misaligned (stderr)
none/tests/blockfault (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/insn_ssse3 (stdout)
none/tests/x86/insn_ssse3 (stderr)
none/tests/x86/ssse3_misaligned (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)