From: <sv...@va...> - 2008-03-14 23:45:11
Author: sewardj
Date: 2008-03-14 23:45:11 +0000 (Fri, 14 Mar 2008)
New Revision: 7679
Log:
Performance improvements to do with Segment Set updates, from
Konstantin Serebryany:
* take advantage of the fact that (properly formed) segment sets are
max-frontiers w.r.t. the happens-before relation. This allows
skipping 50%-95% of all updates.
* if that doesn't avoid the work, at least build new segment sets in
a way which reduces the number of intermediates that need to
be constructed.
* use a different scheme of allocating memory in addToWS and
delFromWS. Cuts total number of calls to hg_zalloc by about 20%.
* limit the size of segment sets to 20 by default; controllable by
--max-segment-set-size= command line parameter.
* more statistics gathering
* a bit of harmless debug stuff in *mu_is_cv*
Modified:
branches/HGDEV/helgrind/hg_main.c
branches/HGDEV/helgrind/hg_wordset.c
Modified: branches/HGDEV/helgrind/hg_main.c
===================================================================
--- branches/HGDEV/helgrind/hg_main.c 2008-03-14 17:07:51 UTC (rev 7678)
+++ branches/HGDEV/helgrind/hg_main.c 2008-03-14 23:45:11 UTC (rev 7679)
@@ -209,6 +209,14 @@
static UWord clo_ignore_n = 1;
static UWord clo_ignore_i = 0;
+
+/* Max number of segments in a segment set.
+ The default value is large enough, but allows to stay sane.
+ Must be >= 4.
+ */
+static UInt clo_max_segment_set_size = 20;
+
+
/* This has to do with printing error messages. See comments on
announce_threadset() and summarise_threadset(). Perhaps it
should be a command line option. */
@@ -260,17 +268,22 @@
/*--- Some very basic stuff ---*/
/*----------------------------------------------------------------*/
+static UWord stat__hg_zalloc = 0;
+static UWord stat__hg_free = 0;
+
static void* hg_zalloc ( SizeT n ) {
void* p;
tl_assert(n > 0);
p = VG_(malloc)( n );
tl_assert(p);
VG_(memset)(p, 0, n);
+ stat__hg_zalloc++;
return p;
}
static void hg_free ( void* p ) {
tl_assert(p);
VG_(free)(p);
+ stat__hg_free++;
}
/* Round a up to the next multiple of N. N must be a power of 2 */
@@ -2121,13 +2134,17 @@
*/
static WordFM *mu_is_cv_map = NULL;
-static void set_mu_is_cv(Word mu)
+static void set_mu_is_cv(Word mu, ThreadId tid)
{
+ ExeContext *context = NULL;
+ // context = VG_(record_ExeContext(tid, -1/*first_ip_delta*/));
if (!mu_is_cv_map) {
mu_is_cv_map = HG_(newFM) (hg_zalloc, hg_free, NULL);
}
- HG_(addToFM)(mu_is_cv_map, mu, mu);
-// VG_(printf)("mu is cv: %p\n", mu);
+ HG_(addToFM)(mu_is_cv_map, mu, (Word)context);
+ // HG_(addToFM)(mu_is_cv_map, mu, (Word)context);
+// VG_(printf)("set_mu_is_cv: %p\n", mu);
+// VG_(get_and_pp_StackTrace)( tid, 15);
}
static void unset_mu_is_cv(Word mu)
@@ -2139,8 +2156,17 @@
static Bool mu_is_cv(Word mu)
{
- return mu_is_cv_map != NULL
- && HG_(lookupFM)(mu_is_cv_map, NULL, NULL, mu);
+ ExeContext *context;
+ Word w;
+ Bool res = mu_is_cv_map != NULL
+ && HG_(lookupFM)(mu_is_cv_map, &w, (Word*)&context, mu);
+
+ if (res && context) {
+ VG_(printf)("mu_is_cv: ");
+ VG_(pp_ExeContext)(context);
+ }
+
+ return res;
}
@@ -3076,6 +3102,9 @@
static UWord stats__msm_New_to_R = 0;
static UWord stats__msm_oldSS_single = 0;
static UWord stats__msm_oldSS_multi = 0;
+static UWord stats__msm_oldSS_multi_shortcut = 0;
+static UWord stats__msm_oldSS_multi_add = 0;
+static UWord stats__msm_oldSS_multi_del = 0;
/* fwds */
static void record_error_Race ( Thread* thr,
@@ -3318,17 +3347,37 @@
/* General case */
UWord i;
- UWord oldSS_size = 0;
+ UWord oldSS_size = SS_get_size(oldSS);
SegmentSet newSS = 0;
+ SegmentID add_vec[oldSS_size+1]; // C99 array.
+ SegmentID del_vec[oldSS_size+1]; // C99 array.
+ UInt add_size = 0, del_size = 0;
- oldSS_size = SS_get_size(oldSS);
tl_assert(oldSS_size > 1);
stats__msm_oldSS_multi++;
*hb_all_p = True;
- newSS = SS_mk_singleton(currS);
+
+ tl_assert(oldSS_size <= clo_max_segment_set_size);
+
+ // fill in the arrays add_vec/del_vec and try a shortcut
+ add_vec[add_size++] = currS;
for (i = 0; i < oldSS_size; i++) {
SegmentID S = SS_get_element(oldSS, i);
+ if (currS == S) {
+ // shortcut:
+ // currS is already contained in oldSS, so we don't need to add it.
+ // Since oldSS is a max frontier
+ // (i.e. for each two different segments S1 and S2 from oldSS
+ // neither HB(S1,S2) nor HB(S2,S1))
+ // we don't need to remove anything.
+ // So, return oldSS unchanged.
+ stats__msm_oldSS_multi_shortcut++;
+ // none of the segments in SS happend-before currS
+ *hb_all_p = False;
+ return oldSS;
+ }
+ // compute happens-before
Bool hb = False;
if (S == currS // Same segment.
|| SEG_get(S)->thr == thr // Same thread.
@@ -3336,21 +3385,69 @@
// different thread, but happens-before
hb = True;
}
+ // trace
if (do_trace) {
VG_(printf)("HB(S%d/T%d,cur)=%d\n",
S, SEG_get(S)->thr->errmsg_index, hb);
}
-
+ // fill in add_vec or del_vec
if (!hb) {
*hb_all_p = False;
- // Not happened-before. Leave this segment in SS.
- if (SS_is_singleton(newSS)) {
- tl_assert(currS != S);
- newSS = HG_(doubletonWS)(univ_ssets, currS, S);
+ add_vec[add_size++] = S;
+ } else {
+ del_vec[del_size++] = S;
+ }
+ }
+
+ // check if we've got a singleton.
+ if (add_size == 1) {
+ // add_size == 1 means that currS happend-after all segments in oldSS
+ tl_assert(*hb_all_p == True && del_size == oldSS_size);
+ return SS_mk_singleton(currS);
+ }
+ tl_assert(add_size >= 2);
+
+ // we couldn't have added more than one segment to the set.
+ tl_assert(add_size <= clo_max_segment_set_size+1);
+
+ if (add_size == clo_max_segment_set_size + 1) {
+ // we've hit the limit of SS size and can't add one more segment.
+ add_size--;
+ tl_assert(del_size == 0);
+ // we will remove the first segment from the old set
+ // (this segment is likely the oldest one)
+ del_vec[del_size++] = add_vec[1];
+
+ // now del_size==1 and add_size >= 4 (clo_max_segment_set_size >= 4)
+ // so we are guaranteed to go to 'del' path below.
+ }
+
+ if (add_size - 1 <= del_size + 1) {
+ tl_assert(add_size <= clo_max_segment_set_size);
+ // create new segment set by adding segments to an empty set.
+ // Requires add_size-1 set operations.
+ for (i = 1; i < add_size; i++) {
+ SegmentID S = add_vec[i];
+ if (i == 1) {
+ newSS = HG_(doubletonWS)(univ_ssets, add_vec[0], S);
} else {
newSS = HG_(addToWS)(univ_ssets, newSS, S);
}
}
+ stats__msm_oldSS_multi_add++;
+ } else {
+ tl_assert(oldSS_size < clo_max_segment_set_size || del_size > 0);
+ // create new segment by removing segments from oldSS
+ // and then adding curS.
+ // Requires del_size+1 set operations.
+ newSS = oldSS;
+ for (i = 0; i < del_size; i++) {
+ newSS = HG_(delFromWS)(univ_ssets, newSS, del_vec[i]);
+ }
+ newSS = HG_(addToWS)(univ_ssets, newSS, currS);
+
+ tl_assert(SS_get_size(newSS) == add_size);
+ stats__msm_oldSS_multi_del++;
}
return newSS;
}
@@ -8305,7 +8402,7 @@
break;
case VG_USERREQ__HG_MUTEX_IS_USED_AS_CONDVAR: // void *
- set_mu_is_cv(args[1]);
+ set_mu_is_cv(args[1], tid);
break;
// These two client requests are useful to mark a section of code
@@ -9199,6 +9296,10 @@
tl_assert(clo_ignore_n == 0 || (clo_ignore_n > 0
&& clo_ignore_i < clo_ignore_n));
}
+ else if (VG_CLO_STREQN(23, arg, "--max-segment-set-size=")) {
+ clo_max_segment_set_size = VG_(atoll)(&arg[23]);
+ tl_assert(clo_max_segment_set_size >= 4);
+ }
else if (VG_CLO_STREQ(arg, "--gen-vcg=no"))
clo_gen_vcg = 0;
@@ -9323,6 +9424,11 @@
}
VG_(printf)("\n");
+ VG_(printf)(" zalloc/free: %,10lu %,10lu\n",
+ stat__hg_zalloc, stat__hg_free);
+
+
+ VG_(printf)("\n");
VG_(printf)(" hbefore: %,10lu queries\n", stats__hbefore_queries);
VG_(printf)(" hbefore: %,10lu hash table hits\n", stats__hbefore_hits);
VG_(printf)(" hbefore: %,10lu graph searches\n", stats__hbefore_gsearches);
@@ -9365,16 +9471,20 @@
VG_(printf)(" sanity checks: %,8lu\n", stats__sanity_checks);
VG_(printf)("\n");
- VG_(printf)(" msm: %,12lu %,12lu BHL-skipped, Race\n",
+ VG_(printf)(" msm: %,14lu %,14lu BHL-skipped, Race\n",
stats__msm_BHL_hack, stats__msm_Race);
- VG_(printf)(" msm: %,12lu %,12lu R_to_R, R_to_W\n",
+ VG_(printf)(" msm: %,14lu %,14lu R_to_R, R_to_W\n",
stats__msm_R_to_R, stats__msm_R_to_W);
- VG_(printf)(" msm: %,12lu %,12lu W_to_R, W_to_W\n",
+ VG_(printf)(" msm: %,14lu %,14lu W_to_R, W_to_W\n",
stats__msm_W_to_R, stats__msm_W_to_W);
- VG_(printf)(" msm: %,12lu %,12lu New_to_R, New_to_W\n",
+ VG_(printf)(" msm: %,14lu %,14lu New_to_R, New_to_W\n",
stats__msm_New_to_R, stats__msm_New_to_W);
- VG_(printf)(" msm: %,12lu %,12lu oldSS_single, oldSS_multi\n",
- stats__msm_oldSS_single, stats__msm_oldSS_multi);
+ VG_(printf)(" msm: %,14lu SS_update_single\n",
+ stats__msm_oldSS_single);
+ VG_(printf)(" msm: %,14lu %,14lu SS_update_multi, shortcut\n",
+ stats__msm_oldSS_multi, stats__msm_oldSS_multi_shortcut);
+ VG_(printf)(" msm: %,14lu %,14lu SS_update_add, SS_update_del\n",
+ stats__msm_oldSS_multi_add, stats__msm_oldSS_multi_del);
VG_(printf)("\n");
VG_(printf)(" secmaps: %,10lu allocd (%,12lu g-a-range)\n",
@@ -9392,13 +9502,13 @@
VG_(printf)("\n");
VG_(printf)(" cache: %,lu totrefs (%,lu misses)\n",
stats__cache_totrefs, stats__cache_totmisses );
- VG_(printf)(" cache: %,12lu Z-fetch, %,12lu F-fetch\n",
+ VG_(printf)(" cache: %,14lu Z-fetch, %,14lu F-fetch\n",
stats__cache_Z_fetches, stats__cache_F_fetches );
- VG_(printf)(" cache: %,12lu Z-wback, %,12lu F-wback\n",
+ VG_(printf)(" cache: %,14lu Z-wback, %,14lu F-wback\n",
stats__cache_Z_wbacks, stats__cache_F_wbacks );
- VG_(printf)(" cache: %,12lu invals, %,12lu flushes\n",
+ VG_(printf)(" cache: %,14lu invals, %,14lu flushes\n",
stats__cache_invals, stats__cache_flushes );
- VG_(printf)(" cache: %,12llu arange New, %,12llu direct-to-Zreps\n",
+ VG_(printf)(" cache: %,14llu arange_New %,14llu direct-to-Zreps\n",
stats__cache_make_New_arange,
stats__cache_make_New_inZrep);
Modified: branches/HGDEV/helgrind/hg_wordset.c
===================================================================
--- branches/HGDEV/helgrind/hg_wordset.c 2008-03-14 17:07:51 UTC (rev 7678)
+++ branches/HGDEV/helgrind/hg_wordset.c 2008-03-14 23:45:11 UTC (rev 7679)
@@ -305,7 +305,47 @@
}
}
+// Similar to add_or_dealloc_WordVec, but different allocation policy:
+// If wv_new is already in wsu, just return it's index and do not deallocate.
+// Else allocate new WordVec, and copy wv_new there.
+static WordSet find_or_alloc_and_add_WordVec( WordSetU* wsu, WordVec* wv_new )
+{
+ Bool have;
+ WordVec* wv_old;
+ UWord/*Set*/ ix_old = -1;
+ tl_assert(wv_new->owner == wsu);
+ have = HG_(lookupFM)( wsu->vec2ix,
+ (Word*)&wv_old, (Word*)&ix_old,
+ (Word)wv_new );
+ if (have) {
+ tl_assert(wv_old != wv_new);
+ tl_assert(wv_old);
+ tl_assert(wv_old->owner == wsu);
+ tl_assert(ix_old < wsu->ix2vec_used);
+ tl_assert(wsu->ix2vec[ix_old] == wv_old);
+ return (WordSet)ix_old;
+ } else {
+ // allocate new WordVec and copy contents from wv_new.
+ WordVec *wv_tmp = new_WV_of_size( wsu, wv_new->size );
+ UInt i;
+ for (i = 0; i < wv_new->size; i++) {
+ wv_tmp->words[i] = wv_new->words[i];
+ }
+ wv_new = wv_tmp;
+ ensure_ix2vec_space( wsu );
+ tl_assert(wsu->ix2vec);
+ tl_assert(wsu->ix2vec_used < wsu->ix2vec_size);
+ wsu->ix2vec[wsu->ix2vec_used] = wv_new;
+ HG_(addToFM)( wsu->vec2ix, (Word)wv_new, (Word)wsu->ix2vec_used );
+ if (0) VG_(printf)("aodW %d\n", (Int)wsu->ix2vec_used );
+ wsu->ix2vec_used++;
+ tl_assert(wsu->ix2vec_used <= wsu->ix2vec_size);
+ return (WordSet)(wsu->ix2vec_used - 1);
+ }
+}
+
+
WordSetU* HG_(newWordSetU) ( void* (*alloc_nofail)( SizeT ),
void (*dealloc)(void*),
Word cacheSize )
@@ -494,10 +534,15 @@
void HG_(ppWSUstats) ( WordSetU* wsu, HChar* name )
{
+ int i;
+ int d_size = 10;
+ int size_distribution[10] = {0};
+
+
VG_(printf)(" WordSet \"%s\":\n", name);
- VG_(printf)(" addTo %10u (%u uncached)\n",
+ VG_(printf)(" addTo %,10u (%,u uncached)\n",
wsu->n_add, wsu->n_add_uncached);
- VG_(printf)(" delFrom %10u (%u uncached)\n",
+ VG_(printf)(" delFrom %,10u (%,u uncached)\n",
wsu->n_del, wsu->n_del_uncached);
VG_(printf)(" union %10u\n", wsu->n_union);
VG_(printf)(" intersect %10u (%u uncached) [nb. incl isSubsetOf]\n",
@@ -511,13 +556,32 @@
VG_(printf)(" anyElementOf %10u\n", wsu->n_anyElementOf);
VG_(printf)(" elementOf %10u\n", wsu->n_elementOf);
VG_(printf)(" isSubsetOf %10u\n", wsu->n_isSubsetOf);
+
+
+ // compute and print size distributions
+ for (i = 0; i < (int)HG_(cardinalityWSU)(wsu); i++) {
+ WordVec *wv = do_ix2vec( wsu, i );
+ int size = wv->size;
+ if (size >= d_size) size = d_size-1;
+ size_distribution[size]++;
+ }
+ tl_assert(size_distribution[0] == 1);
+ for (i = 1; i < d_size; i++) {
+ if (size_distribution[i] == 0) continue;
+ if(i == d_size-1)
+ VG_(printf)(" size[>=%d] %10d\n", i, size_distribution[i]);
+ else
+ VG_(printf)(" size[%d] %10d\n", i, size_distribution[i]);
+ }
}
WordSet HG_(addToWS) ( WordSetU* wsu, WordSet ws, UWord w )
{
UWord k, j;
- WordVec* wv_new;
- WordVec* wv;
+ WordVec* wv = do_ix2vec( wsu, ws ); ;
+ UWord wv_new_arr[wv->size+1]; // C99 array.
+ WordVec wv_new_ = {wsu, wv_new_arr, wv->size+1};
+ WordVec *wv_new = &wv_new_;
WordSet result = (WordSet)(-1); /* bogus */
wsu->n_add++;
@@ -525,7 +589,6 @@
wsu->n_add_uncached++;
/* If already present, this is a no-op. */
- wv = do_ix2vec( wsu, ws );
for (k = 0; k < wv->size; k++) {
if (wv->words[k] == w) {
result = ws;
@@ -533,7 +596,6 @@
}
}
/* Ok, not present. Build a new one ... */
- wv_new = new_WV_of_size( wsu, wv->size + 1 );
k = j = 0;
for (; k < wv->size && wv->words[k] < w; k++) {
wv_new->words[j++] = wv->words[k];
@@ -546,7 +608,7 @@
tl_assert(j == wv_new->size);
/* Find any existing copy, or add the new one. */
- result = add_or_dealloc_WordVec( wsu, wv_new );
+ result = find_or_alloc_and_add_WordVec(wsu, wv_new);
tl_assert(result != (WordSet)(-1));
out:
@@ -554,12 +616,15 @@
return result;
}
+
WordSet HG_(delFromWS) ( WordSetU* wsu, WordSet ws, UWord w )
{
UWord i, j, k;
- WordVec* wv_new;
WordSet result = (WordSet)(-1); /* bogus */
WordVec* wv = do_ix2vec( wsu, ws );
+ UWord wv_new_arr[wv->size-1]; // C99 array.
+ WordVec wv_new_ = {wsu, wv_new_arr, wv->size-1};
+ WordVec *wv_new = &wv_new_;
wsu->n_del++;
@@ -586,7 +651,6 @@
tl_assert(i >= 0 && i < wv->size);
tl_assert(wv->size > 0);
- wv_new = new_WV_of_size( wsu, wv->size - 1 );
j = k = 0;
for (; j < wv->size; j++) {
if (j == i)
@@ -595,7 +659,7 @@
}
tl_assert(k == wv_new->size);
- result = add_or_dealloc_WordVec( wsu, wv_new );
+ result = find_or_alloc_and_add_WordVec(wsu, wv_new);
if (wv->size == 1) {
tl_assert(result == wsu->empty);
}
@@ -854,3 +918,4 @@
/*--------------------------------------------------------------------*/
/*--- end hg_wordset.c ---*/
/*--------------------------------------------------------------------*/
+// vim:shiftwidth=3:softtabstop=3:expandtab
From: Doug E. <dj...@go...> - 2008-03-14 23:35:23
Hi. I've found a couple of references to work on adding a gdb stub to Valgrind, but I don't see anything in the repository. What's the state of this work, and is there any source I can play with? [I've hacked on gdb and written gdb stubs in the past and am interested in helping out.]
From: <sv...@va...> - 2008-03-14 17:07:51
Author: bart
Date: 2008-03-14 17:07:51 +0000 (Fri, 14 Mar 2008)
New Revision: 7678
Log:
Even more optimizations.
Modified:
trunk/exp-drd/drd_bitmap.c
trunk/exp-drd/drd_bitmap.h
trunk/exp-drd/drd_main.c
trunk/exp-drd/drd_thread.h
trunk/exp-drd/pub_drd_bitmap.h
Modified: trunk/exp-drd/drd_bitmap.c
===================================================================
--- trunk/exp-drd/drd_bitmap.c 2008-03-13 20:33:29 UTC (rev 7677)
+++ trunk/exp-drd/drd_bitmap.c 2008-03-14 17:07:51 UTC (rev 7678)
@@ -77,7 +77,7 @@
* Record an access of type access_type at addresses a .. a + size - 1 in
* bitmap bm.
*/
-static inline
+static
void bm_access_range(struct bitmap* const bm,
const Addr a1, const Addr a2,
const BmAccessTypeT access_type)
@@ -134,12 +134,122 @@
}
}
+static inline
+void bm_access_aligned_load(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
+{
+ struct bitmap2* bm2;
+
+#if 0
+ /* Commented out the statements below because of performance reasons. */
+ tl_assert(bm);
+ tl_assert(a1 < a2);
+ tl_assert((a2 - a1) == 1 || (a2 - a1) == 2
+ || (a2 - a1) == 4 || (a2 - a1) == 8);
+ tl_assert((a1 & (a2 - a1 - 1)) == 0);
+#endif
+
+ bm2 = bm2_lookup_or_insert(bm, a1 >> ADDR0_BITS);
+ tl_assert(bm2);
+
+ bm0_set_range(bm2->bm1.bm0_r, a1 & ADDR0_MASK, (a2 - 1) & ADDR0_MASK);
+}
+
+static inline
+void bm_access_aligned_store(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
+{
+ struct bitmap2* bm2;
+
+#if 0
+ /* Commented out the statements below because of performance reasons. */
+ tl_assert(bm);
+ tl_assert(a1 < a2);
+ tl_assert((a2 - a1) == 1 || (a2 - a1) == 2
+ || (a2 - a1) == 4 || (a2 - a1) == 8);
+ tl_assert((a1 & (a2 - a1 - 1)) == 0);
+#endif
+
+ bm2 = bm2_lookup_or_insert(bm, a1 >> ADDR0_BITS);
+ tl_assert(bm2);
+
+ bm0_set_range(bm2->bm1.bm0_w, a1 & ADDR0_MASK, (a2 - 1) & ADDR0_MASK);
+}
+
void bm_access_range_load(struct bitmap* const bm,
const Addr a1, const Addr a2)
{
bm_access_range(bm, a1, a2, eLoad);
}
+void bm_access_load_1(struct bitmap* const bm, const Addr a1)
+{
+ bm_access_aligned_load(bm, a1, a1 + 1);
+}
+
+void bm_access_load_2(struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 1) == 0)
+ bm_access_aligned_load(bm, a1, a1 + 2);
+ else
+ bm_access_range(bm, a1, a1 + 2, eLoad);
+}
+
+void bm_access_load_4(struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 3) == 0)
+ bm_access_aligned_load(bm, a1, a1 + 4);
+ else
+ bm_access_range(bm, a1, a1 + 4, eLoad);
+}
+
+void bm_access_load_8(struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 7) == 0)
+ bm_access_aligned_load(bm, a1, a1 + 8);
+ else if ((a1 & 3) == 0)
+ {
+ bm_access_aligned_load(bm, a1 + 0, a1 + 4);
+ bm_access_aligned_load(bm, a1 + 4, a1 + 8);
+ }
+ else
+ bm_access_range(bm, a1, a1 + 8, eLoad);
+}
+
+void bm_access_store_1(struct bitmap* const bm, const Addr a1)
+{
+ bm_access_aligned_store(bm, a1, a1 + 1);
+}
+
+void bm_access_store_2(struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 1) == 0)
+ bm_access_aligned_store(bm, a1, a1 + 2);
+ else
+ bm_access_range(bm, a1, a1 + 2, eStore);
+}
+
+void bm_access_store_4(struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 3) == 0)
+ bm_access_aligned_store(bm, a1, a1 + 4);
+ else
+ bm_access_range(bm, a1, a1 + 4, eStore);
+}
+
+void bm_access_store_8(struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 7) == 0)
+ bm_access_aligned_store(bm, a1, a1 + 8);
+ else if ((a1 & 3) == 0)
+ {
+ bm_access_aligned_store(bm, a1 + 0, a1 + 4);
+ bm_access_aligned_store(bm, a1 + 4, a1 + 8);
+ }
+ else
+ bm_access_range(bm, a1, a1 + 8, eStore);
+}
+
void bm_access_range_store(struct bitmap* const bm,
const Addr a1, const Addr a2)
{
@@ -269,7 +379,7 @@
UWord mask;
#if 0
- // Commented out the assert statements below because of performance reasons.
+ /* Commented out the statements below because of performance reasons. */
tl_assert(a1);
tl_assert(a1 <= a2);
tl_assert(UWORD_MSB(a1) == UWORD_MSB(a2)
@@ -353,7 +463,6 @@
}
}
-inline
Bool bm_has_conflict_with(const struct bitmap* const bm,
const Addr a1, const Addr a2,
const BmAccessTypeT access_type)
@@ -420,12 +529,125 @@
return False;
}
+static inline
+Bool bm_aligned_load_has_conflict_with(const struct bitmap* const bm,
+ const Addr a1, const Addr a2)
+{
+ struct bitmap2* bm2;
+
+#if 0
+ /* Commented out the statements below because of performance reasons. */
+ tl_assert(bm);
+ tl_assert(a1 < a2);
+ tl_assert((a2 - a1) == 1 || (a2 - a1) == 2
+ || (a2 - a1) == 4 || (a2 - a1) == 8);
+ tl_assert((a1 & (a2 - a1 - 1)) == 0);
+#endif
+
+ bm2 = bm_lookup(bm, a1);
+
+ if (bm2
+ && bm0_is_any_set(bm2->bm1.bm0_w, a1 & ADDR0_MASK, (a2-1) & ADDR0_MASK))
+ {
+ return True;
+ }
+ return False;
+}
+
+static inline
+Bool bm_aligned_store_has_conflict_with(const struct bitmap* const bm,
+ const Addr a1, const Addr a2)
+{
+ struct bitmap2* bm2;
+
+#if 0
+ /* Commented out the statements below because of performance reasons. */
+ tl_assert(bm);
+ tl_assert(a1 < a2);
+ tl_assert((a2 - a1) == 1 || (a2 - a1) == 2
+ || (a2 - a1) == 4 || (a2 - a1) == 8);
+ tl_assert((a1 & (a2 - a1 - 1)) == 0);
+#endif
+
+ bm2 = bm_lookup(bm, a1);
+
+ if (bm2)
+ {
+ const struct bitmap1* const p1 = &bm2->bm1;
+
+ if (bm0_is_any_set(p1->bm0_r, a1 & ADDR0_MASK, (a2-1) & ADDR0_MASK)
+ | bm0_is_any_set(p1->bm0_w, a1 & ADDR0_MASK, (a2-1) & ADDR0_MASK))
+ {
+ return True;
+ }
+ }
+ return False;
+}
+
Bool bm_load_has_conflict_with(const struct bitmap* const bm,
const Addr a1, const Addr a2)
{
return bm_has_conflict_with(bm, a1, a2, eLoad);
}
+Bool bm_load_1_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ return bm_aligned_load_has_conflict_with(bm, a1, a1 + 1);
+}
+
+Bool bm_load_2_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 1) == 0)
+ return bm_aligned_load_has_conflict_with(bm, a1, a1 + 2);
+ else
+ return bm_has_conflict_with(bm, a1, a1 + 2, eLoad);
+}
+
+Bool bm_load_4_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 3) == 0)
+ return bm_aligned_load_has_conflict_with(bm, a1, a1 + 4);
+ else
+ return bm_has_conflict_with(bm, a1, a1 + 4, eLoad);
+}
+
+Bool bm_load_8_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 7) == 0)
+ return bm_aligned_load_has_conflict_with(bm, a1, a1 + 8);
+ else
+ return bm_has_conflict_with(bm, a1, a1 + 8, eLoad);
+}
+
+Bool bm_store_1_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ return bm_aligned_store_has_conflict_with(bm, a1, a1 + 1);
+}
+
+Bool bm_store_2_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 1) == 0)
+ return bm_aligned_store_has_conflict_with(bm, a1, a1 + 2);
+ else
+ return bm_has_conflict_with(bm, a1, a1 + 2, eStore);
+}
+
+Bool bm_store_4_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 3) == 0)
+ return bm_aligned_store_has_conflict_with(bm, a1, a1 + 4);
+ else
+ return bm_has_conflict_with(bm, a1, a1 + 4, eStore);
+}
+
+Bool bm_store_8_has_conflict_with(const struct bitmap* const bm, const Addr a1)
+{
+ if ((a1 & 7) == 0)
+ return bm_aligned_store_has_conflict_with(bm, a1, a1 + 8);
+ else
+ return bm_has_conflict_with(bm, a1, a1 + 8, eStore);
+}
+
Bool bm_store_has_conflict_with(const struct bitmap* const bm,
const Addr a1, const Addr a2)
{
Modified: trunk/exp-drd/drd_bitmap.h
===================================================================
--- trunk/exp-drd/drd_bitmap.h 2008-03-13 20:33:29 UTC (rev 7677)
+++ trunk/exp-drd/drd_bitmap.h 2008-03-14 17:07:51 UTC (rev 7678)
@@ -42,7 +42,7 @@
#define ADDR0_BITS 12
-#define ADDR0_COUNT (1UL << ADDR0_BITS)
+#define ADDR0_COUNT ((UWord)1 << ADDR0_BITS)
#define ADDR0_MASK (ADDR0_COUNT - 1)
@@ -89,27 +89,53 @@
static __inline__ UWord bm0_mask(const Addr a)
{
- return (1UL << UWORD_LSB(a));
+ return ((UWord)1 << UWORD_LSB(a));
}
static __inline__ void bm0_set(UWord* bm0, const Addr a)
{
//tl_assert(a < ADDR0_COUNT);
- bm0[a >> BITS_PER_BITS_PER_UWORD] |= 1UL << UWORD_LSB(a);
+ bm0[a >> BITS_PER_BITS_PER_UWORD] |= (UWord)1 << UWORD_LSB(a);
}
+/** Set all of the addresses in range a1..a2 (inclusive) in bitmap bm0. */
+static __inline__ void bm0_set_range(UWord* bm0, const Addr a1, const Addr a2)
+{
+#if 0
+ tl_assert(a1 < ADDR0_COUNT);
+ tl_assert(a2 < ADDR0_COUNT);
+ tl_assert(a1 <= a2);
+ tl_assert(UWORD_MSB(a1) == UWORD_MSB(a2));
+#endif
+ bm0[a1 >> BITS_PER_BITS_PER_UWORD]
+ |= ((UWord)2 << UWORD_LSB(a2)) - ((UWord)1 << UWORD_LSB(a1));
+}
+
static __inline__ void bm0_clear(UWord* bm0, const Addr a)
{
//tl_assert(a < ADDR0_COUNT);
- bm0[a >> BITS_PER_BITS_PER_UWORD] &= ~(1UL << UWORD_LSB(a));
+ bm0[a >> BITS_PER_BITS_PER_UWORD] &= ~((UWord)1 << UWORD_LSB(a));
}
static __inline__ UWord bm0_is_set(const UWord* bm0, const Addr a)
{
//tl_assert(a < ADDR0_COUNT);
- return (bm0[a >> BITS_PER_BITS_PER_UWORD] & (1UL << UWORD_LSB(a)));
+ return (bm0[a >> BITS_PER_BITS_PER_UWORD] & ((UWord)1 << UWORD_LSB(a)));
}
+/** Return true if any of the bits a1..a2 (inclusive) are set in bm0. */
+static __inline__ UWord bm0_is_any_set(const UWord* bm0,
+ const Addr a1, const Addr a2)
+{
+#if 0
+ tl_assert(a1 < ADDR0_COUNT);
+ tl_assert(a2 < ADDR0_COUNT);
+ tl_assert(a1 <= a2);
+ tl_assert(UWORD_MSB(a1) == UWORD_MSB(a2));
+#endif
+ return (bm0[a1 >> BITS_PER_BITS_PER_UWORD]
+ & (((UWord)2 << UWORD_LSB(a2)) - ((UWord)1 << UWORD_LSB(a1))));
+}
struct bitmap2
{
@@ -120,40 +146,67 @@
/* Complete bitmap. */
struct bitmap
{
- OSet* oset;
+ Addr last_lookup_a1;
+ struct bitmap2* last_lookup_result;
+ OSet* oset;
};
static __inline__
struct bitmap2* bm_lookup(const struct bitmap* const bm, const Addr a)
{
+ struct bitmap2* result;
const UWord a1 = a >> ADDR0_BITS;
- return VG_(OSetGen_Lookup)(bm->oset, &a1);
+ if (a1 == bm->last_lookup_a1)
+ {
+ //tl_assert(bm->last_lookup_result == VG_(OSetGen_Lookup)(bm->oset, &a1));
+ return bm->last_lookup_result;
+ }
+ result = VG_(OSetGen_Lookup)(bm->oset,&a1);
+ if (result)
+ {
+ ((struct bitmap*)bm)->last_lookup_a1 = a1;
+ ((struct bitmap*)bm)->last_lookup_result = result;
+ }
+ return result;
}
static __inline__
struct bitmap2* bm2_insert(const struct bitmap* const bm,
const UWord a1)
{
- struct bitmap2* const node = VG_(OSetGen_AllocNode)(bm->oset, sizeof(*node));
- node->addr = a1;
- VG_(memset)(&node->bm1, 0, sizeof(node->bm1));
- VG_(OSetGen_Insert)(bm->oset, node);
-
- s_bitmap2_creation_count++;
-
- return node;
+ struct bitmap2* const node = VG_(OSetGen_AllocNode)(bm->oset, sizeof(*node));
+ node->addr = a1;
+ VG_(memset)(&node->bm1, 0, sizeof(node->bm1));
+ VG_(OSetGen_Insert)(bm->oset, node);
+
+ ((struct bitmap*)bm)->last_lookup_a1 = a1;
+ ((struct bitmap*)bm)->last_lookup_result = node;
+
+ s_bitmap2_creation_count++;
+
+ return node;
}
static __inline__
struct bitmap2* bm2_lookup_or_insert(const struct bitmap* const bm,
const UWord a1)
{
- struct bitmap2* p2 = VG_(OSetGen_Lookup)(bm->oset, &a1);
- if (p2 == 0)
- {
- p2 = bm2_insert(bm, a1);
- }
- return p2;
+ struct bitmap2* p2;
+
+ if (a1 == bm->last_lookup_a1)
+ {
+ //tl_assert(bm->last_lookup_result == VG_(OSetGen_Lookup)(bm->oset, &a1));
+ return bm->last_lookup_result;
+ }
+
+ p2 = VG_(OSetGen_Lookup)(bm->oset, &a1);
+ if (p2 == 0)
+ {
+ p2 = bm2_insert(bm, a1);
+ }
+ ((struct bitmap*)bm)->last_lookup_a1 = a1;
+ ((struct bitmap*)bm)->last_lookup_result = p2;
+ return p2;
}
Modified: trunk/exp-drd/drd_main.c
===================================================================
--- trunk/exp-drd/drd_main.c 2008-03-13 20:33:29 UTC (rev 7677)
+++ trunk/exp-drd/drd_main.c 2008-03-14 17:07:51 UTC (rev 7678)
@@ -148,12 +148,47 @@
// Implements the thread-related core callbacks.
//
-static
-VG_REGPARM(2) void drd_trace_load(Addr addr, SizeT size)
+static void drd_trace_mem_access(const Addr addr, const SizeT size,
+ const BmAccessTypeT access_type)
{
+ char vc[80];
+ vc_snprint(vc, sizeof(vc), thread_get_vc(thread_get_running_tid()));
+ VG_(message)(Vg_UserMsg,
+ "%s 0x%lx size %ld %s (vg %d / drd %d / vc %s)",
+ access_type == eLoad ? "load " : "store",
+ addr,
+ size,
+ thread_get_name(thread_get_running_tid()),
+ VG_(get_running_tid)(),
+ thread_get_running_tid(),
+ vc);
+ VG_(get_and_pp_StackTrace)(VG_(get_running_tid)(),
+ VG_(clo_backtrace_size));
+ tl_assert(DrdThreadIdToVgThreadId(thread_get_running_tid())
+ == VG_(get_running_tid)());
+}
+
+static void drd_report_race(const Addr addr, const SizeT size,
+ const BmAccessTypeT access_type)
+{
+ DataRaceErrInfo drei;
+ drei.tid = VG_(get_running_tid)();
+ drei.addr = addr;
+ drei.size = size;
+ drei.access_type = access_type;
+ VG_(maybe_record_error)(VG_(get_running_tid)(),
+ DataRaceErr,
+ VG_(get_IP)(VG_(get_running_tid)()),
+ "Conflicting accesses",
+ &drei);
+}
+
+static VG_REGPARM(2) void drd_trace_load(Addr addr, SizeT size)
+{
Segment* sg;
#if 0
+ /* The assert below has been commented out because of performance reasons.*/
tl_assert(thread_get_running_tid()
== VgThreadIdToDrdThreadId(VG_(get_running_tid())));
#endif
@@ -161,48 +196,106 @@
if (! running_thread_is_recording())
return;
-#if 1
if (drd_trace_mem || (addr == drd_trace_address))
{
- char vc[80];
- vc_snprint(vc, sizeof(vc), thread_get_vc(thread_get_running_tid()));
- VG_(message)(Vg_UserMsg, "load 0x%lx size %ld %s (vg %d / drd %d / vc %s)",
- addr,
- size,
- thread_get_name(thread_get_running_tid()),
- VG_(get_running_tid)(),
- thread_get_running_tid(),
- vc);
- VG_(get_and_pp_StackTrace)(VG_(get_running_tid)(),
- VG_(clo_backtrace_size));
- tl_assert(DrdThreadIdToVgThreadId(thread_get_running_tid())
- == VG_(get_running_tid)());
+ drd_trace_mem_access(addr, size, eLoad);
}
-#endif
- sg = thread_get_segment(thread_get_running_tid());
+ sg = running_thread_get_segment();
bm_access_range_load(sg->bm, addr, addr + size);
if (bm_load_has_conflict_with(thread_get_danger_set(), addr, addr + size)
&& ! drd_is_suppressed(addr, addr + size))
{
- DataRaceErrInfo drei;
- drei.tid = VG_(get_running_tid)();
- drei.addr = addr;
- drei.size = size;
- drei.access_type = eLoad;
- VG_(maybe_record_error)(VG_(get_running_tid)(),
- DataRaceErr,
- VG_(get_IP)(VG_(get_running_tid)()),
- "Conflicting accesses",
- &drei);
+ drd_report_race(addr, size, eLoad);
}
}
+static VG_REGPARM(1) void drd_trace_load_1(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 1, eLoad);
+ }
+ sg = running_thread_get_segment();
+ bm_access_load_1(sg->bm, addr);
+ if (bm_load_1_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 1))
+ {
+ drd_report_race(addr, 1, eLoad);
+ }
+}
+
+static VG_REGPARM(1) void drd_trace_load_2(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 2, eLoad);
+ }
+ sg = running_thread_get_segment();
+ bm_access_load_2(sg->bm, addr);
+ if (bm_load_2_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 2))
+ {
+ drd_report_race(addr, 2, eLoad);
+ }
+}
+
+static VG_REGPARM(1) void drd_trace_load_4(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 4, eLoad);
+ }
+ sg = running_thread_get_segment();
+ bm_access_load_4(sg->bm, addr);
+ if (bm_load_4_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 4))
+ {
+ drd_report_race(addr, 4, eLoad);
+ }
+}
+
+static VG_REGPARM(1) void drd_trace_load_8(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 8, eLoad);
+ }
+ sg = running_thread_get_segment();
+ bm_access_load_8(sg->bm, addr);
+ if (bm_load_8_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 8))
+ {
+ drd_report_race(addr, 8, eLoad);
+ }
+}
+
static
VG_REGPARM(2) void drd_trace_store(Addr addr, SizeT size)
{
Segment* sg;
#if 0
+ /* The assert below has been commented out for performance reasons. */
tl_assert(thread_get_running_tid()
== VgThreadIdToDrdThreadId(VG_(get_running_tid())));
#endif
@@ -210,43 +303,99 @@
if (! running_thread_is_recording())
return;
-#if 1
if (drd_trace_mem || (addr == drd_trace_address))
{
- char vc[80];
- vc_snprint(vc, sizeof(vc), thread_get_vc(thread_get_running_tid()));
- VG_(message)(Vg_UserMsg, "store 0x%lx size %ld %s (vg %d / drd %d / off %d / vc %s)",
- addr,
- size,
- thread_get_name(thread_get_running_tid()),
- VG_(get_running_tid)(),
- thread_get_running_tid(),
- addr - thread_get_stack_min(thread_get_running_tid()),
- vc);
- VG_(get_and_pp_StackTrace)(VG_(get_running_tid)(),
- VG_(clo_backtrace_size));
- tl_assert(DrdThreadIdToVgThreadId(thread_get_running_tid())
- == VG_(get_running_tid)());
+ drd_trace_mem_access(addr, size, eStore);
}
-#endif
- sg = thread_get_segment(thread_get_running_tid());
+ sg = running_thread_get_segment();
bm_access_range_store(sg->bm, addr, addr + size);
if (bm_store_has_conflict_with(thread_get_danger_set(), addr, addr + size)
&& ! drd_is_suppressed(addr, addr + size))
{
- DataRaceErrInfo drei;
- drei.tid = VG_(get_running_tid)();
- drei.addr = addr;
- drei.size = size;
- drei.access_type = eStore;
- VG_(maybe_record_error)(VG_(get_running_tid)(),
- DataRaceErr,
- VG_(get_IP)(VG_(get_running_tid)()),
- "Conflicting accesses",
- &drei);
+ drd_report_race(addr, size, eStore);
}
}
+static VG_REGPARM(1) void drd_trace_store_1(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 1, eStore);
+ }
+ sg = running_thread_get_segment();
+ bm_access_store_1(sg->bm, addr);
+ if (bm_store_1_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 1))
+ {
+ drd_report_race(addr, 1, eStore);
+ }
+}
+
+static VG_REGPARM(1) void drd_trace_store_2(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 2, eStore);
+ }
+ sg = running_thread_get_segment();
+ bm_access_store_2(sg->bm, addr);
+ if (bm_store_2_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 2))
+ {
+ drd_report_race(addr, 2, eStore);
+ }
+}
+
+static VG_REGPARM(1) void drd_trace_store_4(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 4, eStore);
+ }
+ sg = running_thread_get_segment();
+ bm_access_store_4(sg->bm, addr);
+ if (bm_store_4_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 4))
+ {
+ drd_report_race(addr, 4, eStore);
+ }
+}
+
+static VG_REGPARM(1) void drd_trace_store_8(Addr addr)
+{
+ Segment* sg;
+
+ if (! running_thread_is_recording())
+ return;
+
+ if (drd_trace_mem || (addr == drd_trace_address))
+ {
+ drd_trace_mem_access(addr, 8, eStore);
+ }
+ sg = running_thread_get_segment();
+ bm_access_store_8(sg->bm, addr);
+ if (bm_store_8_has_conflict_with(thread_get_danger_set(), addr)
+ && ! drd_is_suppressed(addr, addr + 8))
+ {
+ drd_report_race(addr, 8, eStore);
+ }
+}
+
static void drd_pre_mem_read(const CorePart part,
const ThreadId tid,
Char* const s,
@@ -572,6 +721,106 @@
# endif
}
+static void instrument_load(IRSB* const bb,
+ IRExpr* const addr_expr,
+ const HWord size)
+{
+ IRExpr* size_expr;
+ IRExpr** argv;
+ IRDirty* di;
+
+ switch (size)
+ {
+ case 1:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_load_1",
+ VG_(fnptr_to_fnentry)(drd_trace_load_1),
+ argv);
+ break;
+ case 2:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_load_2",
+ VG_(fnptr_to_fnentry)(drd_trace_load_2),
+ argv);
+ break;
+ case 4:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_load_4",
+ VG_(fnptr_to_fnentry)(drd_trace_load_4),
+ argv);
+ break;
+ case 8:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_load_8",
+ VG_(fnptr_to_fnentry)(drd_trace_load_8),
+ argv);
+ break;
+ default:
+ size_expr = mkIRExpr_HWord(size);
+ argv = mkIRExprVec_2(addr_expr, size_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/2,
+ "drd_trace_load",
+ VG_(fnptr_to_fnentry)(drd_trace_load),
+ argv);
+ break;
+ }
+ addStmtToIRSB(bb, IRStmt_Dirty(di));
+}
+
+static void instrument_store(IRSB* const bb,
+ IRExpr* const addr_expr,
+ const HWord size)
+{
+ IRExpr* size_expr;
+ IRExpr** argv;
+ IRDirty* di;
+
+ switch (size)
+ {
+ case 1:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_store_1",
+ VG_(fnptr_to_fnentry)(drd_trace_store_1),
+ argv);
+ break;
+ case 2:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_store_2",
+ VG_(fnptr_to_fnentry)(drd_trace_store_2),
+ argv);
+ break;
+ case 4:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_store_4",
+ VG_(fnptr_to_fnentry)(drd_trace_store_4),
+ argv);
+ break;
+ case 8:
+ argv = mkIRExprVec_1(addr_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/1,
+ "drd_trace_store_8",
+ VG_(fnptr_to_fnentry)(drd_trace_store_8),
+ argv);
+ break;
+ default:
+ size_expr = mkIRExpr_HWord(size);
+ argv = mkIRExprVec_2(addr_expr, size_expr);
+ di = unsafeIRDirty_0_N(/*regparms*/2,
+ "drd_trace_store",
+ VG_(fnptr_to_fnentry)(drd_trace_store),
+ argv);
+ break;
+ }
+ addStmtToIRSB(bb, IRStmt_Dirty(di));
+}
+
static
IRSB* drd_instrument(VgCallbackClosure* const closure,
IRSB* const bb_in,
@@ -584,8 +833,6 @@
Int i;
IRSB* bb;
IRExpr** argv;
- IRExpr* addr_expr;
- IRExpr* size_expr;
Bool instrument = True;
Bool bus_locked = False;
@@ -631,15 +878,10 @@
case Ist_Store:
if (instrument && ! bus_locked)
{
- addr_expr = st->Ist.Store.addr;
- size_expr = mkIRExpr_HWord(
- sizeofIRType(typeOfIRExpr(bb->tyenv, st->Ist.Store.data)));
- argv = mkIRExprVec_2(addr_expr, size_expr);
- di = unsafeIRDirty_0_N(/*regparms*/2,
- "drd_trace_store",
- VG_(fnptr_to_fnentry)(drd_trace_store),
- argv);
- addStmtToIRSB(bb, IRStmt_Dirty(di));
+ instrument_store(bb,
+ st->Ist.Store.addr,
+ sizeofIRType(typeOfIRExpr(bb->tyenv,
+ st->Ist.Store.data)));
}
addStmtToIRSB(bb, st);
break;
@@ -650,14 +892,9 @@
const IRExpr* const data = st->Ist.WrTmp.data;
if (data->tag == Iex_Load)
{
- addr_expr = data->Iex.Load.addr;
- size_expr = mkIRExpr_HWord(sizeofIRType(data->Iex.Load.ty));
- argv = mkIRExprVec_2(addr_expr, size_expr);
- di = unsafeIRDirty_0_N(/*regparms*/2,
- "drd_trace_load",
- VG_(fnptr_to_fnentry)(drd_trace_load),
- argv);
- addStmtToIRSB(bb, IRStmt_Dirty(di));
+ instrument_load(bb,
+ data->Iex.Load.addr,
+ sizeofIRType(data->Iex.Load.ty));
}
}
addStmtToIRSB(bb, st);
Modified: trunk/exp-drd/drd_thread.h
===================================================================
--- trunk/exp-drd/drd_thread.h 2008-03-13 20:33:29 UTC (rev 7677)
+++ trunk/exp-drd/drd_thread.h 2008-03-14 17:07:51 UTC (rev 7678)
@@ -171,5 +171,11 @@
return s_threadinfo[tid].last;
}
+/** Return a pointer to the latest segment for the running thread. */
+static inline
+Segment* running_thread_get_segment(void)
+{
+ return thread_get_segment(s_drd_running_tid);
+}
#endif // __THREAD_H
Modified: trunk/exp-drd/pub_drd_bitmap.h
===================================================================
--- trunk/exp-drd/pub_drd_bitmap.h 2008-03-13 20:33:29 UTC (rev 7677)
+++ trunk/exp-drd/pub_drd_bitmap.h 2008-03-14 17:07:51 UTC (rev 7678)
@@ -58,6 +58,14 @@
void bm_delete(struct bitmap* const bm);
void bm_access_range_load(struct bitmap* const bm,
const Addr a1, const Addr a2);
+void bm_access_load_1(struct bitmap* const bm, const Addr a1);
+void bm_access_load_2(struct bitmap* const bm, const Addr a1);
+void bm_access_load_4(struct bitmap* const bm, const Addr a1);
+void bm_access_load_8(struct bitmap* const bm, const Addr a1);
+void bm_access_store_1(struct bitmap* const bm, const Addr a1);
+void bm_access_store_2(struct bitmap* const bm, const Addr a1);
+void bm_access_store_4(struct bitmap* const bm, const Addr a1);
+void bm_access_store_8(struct bitmap* const bm, const Addr a1);
void bm_access_range_store(struct bitmap* const bm,
const Addr a1, const Addr a2);
Bool bm_has(const struct bitmap* const bm,
@@ -76,8 +84,16 @@
Bool bm_has_conflict_with(const struct bitmap* const bm,
const Addr a1, const Addr a2,
const BmAccessTypeT access_type);
+Bool bm_load_1_has_conflict_with(const struct bitmap* const bm, const Addr a1);
+Bool bm_load_2_has_conflict_with(const struct bitmap* const bm, const Addr a1);
+Bool bm_load_4_has_conflict_with(const struct bitmap* const bm, const Addr a1);
+Bool bm_load_8_has_conflict_with(const struct bitmap* const bm, const Addr a1);
Bool bm_load_has_conflict_with(const struct bitmap* const bm,
const Addr a1, const Addr a2);
+Bool bm_store_1_has_conflict_with(const struct bitmap* const bm,const Addr a1);
+Bool bm_store_2_has_conflict_with(const struct bitmap* const bm,const Addr a1);
+Bool bm_store_4_has_conflict_with(const struct bitmap* const bm,const Addr a1);
+Bool bm_store_8_has_conflict_with(const struct bitmap* const bm,const Addr a1);
Bool bm_store_has_conflict_with(const struct bitmap* const bm,
const Addr a1, const Addr a2);
void bm_swap(struct bitmap* const bm1, struct bitmap* const bm2);
From: Tom H. <th...@cy...> - 2008-03-14 06:15:46
Nightly build on alvis (i686, Red Hat 7.3) started at 2008-03-14 03:15:03 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 378 tests, 78 stderr failures, 1 stdout failure, 29 post failures ==
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-pool-0 (stderr)
memcheck/tests/leak-pool-1 (stderr)
memcheck/tests/leak-pool-2 (stderr)
memcheck/tests/leak-pool-3 (stderr)
memcheck/tests/leak-pool-4 (stderr)
memcheck/tests/leak-pool-5 (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/lsframe1 (stderr)
memcheck/tests/lsframe2 (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/noisy_child (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/varinfo1 (stderr)
memcheck/tests/varinfo2 (stderr)
memcheck/tests/varinfo3 (stderr)
memcheck/tests/varinfo4 (stderr)
memcheck/tests/varinfo5 (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/x86/bug152022 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/xor-undef-x86 (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/alloc-fns-A (post)
massif/tests/alloc-fns-B (post)
massif/tests/basic (post)
massif/tests/basic2 (post)
massif/tests/big-alloc (post)
massif/tests/culling1 (stderr)
massif/tests/culling2 (stderr)
massif/tests/custom_alloc (post)
massif/tests/deep-A (post)
massif/tests/deep-B (stderr)
massif/tests/deep-B (post)
massif/tests/deep-C (stderr)
massif/tests/deep-C (post)
massif/tests/deep-D (post)
massif/tests/ignoring (post)
massif/tests/insig (post)
massif/tests/long-names (post)
massif/tests/long-time (post)
massif/tests/new-cpp (post)
massif/tests/null (post)
massif/tests/one (post)
massif/tests/overloaded-new (post)
massif/tests/peak (post)
massif/tests/peak2 (stderr)
massif/tests/peak2 (post)
massif/tests/realloc (stderr)
massif/tests/realloc (post)
massif/tests/thresholds_0_0 (post)
massif/tests/thresholds_0_10 (post)
massif/tests/thresholds_10_0 (post)
massif/tests/thresholds_10_10 (post)
massif/tests/thresholds_5_0 (post)
massif/tests/thresholds_5_10 (post)
massif/tests/zero1 (post)
massif/tests/zero2 (post)
none/tests/blockfault (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/shell (stderr)
none/tests/shell_valid1 (stderr)
none/tests/shell_valid2 (stderr)
none/tests/shell_valid3 (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/hg06_readshared (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc02_simple_tls (stderr)
helgrind/tests/tc03_re_excl (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc07_hbl1 (stderr)
helgrind/tests/tc08_hbl2 (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc11_XCHG (stderr)
helgrind/tests/tc12_rwl_trivial (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
helgrind/tests/tc24_nonzero_sem (stderr)
exp-drd/tests/tc09_bad_unlock (stderr)
exp-drd/tests/tc12_rwl_trivial (stderr)
From: Tom H. <th...@cy...> - 2008-03-14 04:27:44
Nightly build on lloyd (x86_64, Fedora 7) started at 2008-03-14 03:05:06 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 415 tests, 10 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)
exp-drd/tests/pth_detached2 (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 415 tests, 9 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Fri Mar 14 03:46:06 2008
--- new.short Fri Mar 14 04:27:43 2008
***************
*** 8,10 ****
! == 415 tests, 9 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
--- 8,10 ----
! == 415 tests, 10 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
***************
*** 20,21 ****
--- 20,22 ----
  exp-drd/tests/omp_prime_racy (stderr)
+ exp-drd/tests/pth_detached2 (stderr)
From: Tom H. <th...@cy...> - 2008-03-14 03:53:32
Nightly build on aston (x86_64, Fedora Core 5) started at 2008-03-14 03:20:06 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 421 tests, 13 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)
exp-drd/tests/pth_detached2 (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 421 tests, 12 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Fri Mar 14 03:36:49 2008
--- new.short Fri Mar 14 03:53:35 2008
***************
*** 8,10 ****
! == 421 tests, 12 stderr failures, 1 stdout failure, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
--- 8,10 ----
! == 421 tests, 13 stderr failures, 1 stdout failure, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
***************
*** 22,23 ****
--- 22,24 ----
  exp-drd/tests/omp_prime_racy (stderr)
+ exp-drd/tests/pth_detached2 (stderr)
From: Tom H. <th...@cy...> - 2008-03-14 03:40:12
Nightly build on trojan (x86_64, Fedora Core 6) started at 2008-03-14 03:25:05 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 419 tests, 13 stderr failures, 5 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/bug133694 (stdout)
memcheck/tests/x86/bug133694 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/cmdline1 (stdout)
none/tests/cmdline2 (stdout)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)
exp-drd/tests/pth_detached2 (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 419 tests, 12 stderr failures, 5 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/bug133694 (stdout)
memcheck/tests/x86/bug133694 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/cmdline1 (stdout)
none/tests/cmdline2 (stdout)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Fri Mar 14 03:32:40 2008
--- new.short Fri Mar 14 03:40:13 2008
***************
*** 8,10 ****
! == 419 tests, 12 stderr failures, 5 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
--- 8,10 ----
! == 419 tests, 13 stderr failures, 5 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
***************
*** 26,27 ****
--- 26,28 ----
  exp-drd/tests/omp_prime_racy (stderr)
+ exp-drd/tests/pth_detached2 (stderr)
From: Tom H. <th...@cy...> - 2008-03-14 03:33:52
Nightly build on dellow (x86_64, Fedora 8) started at 2008-03-14 03:10:03 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 415 tests, 12 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)
exp-drd/tests/pth_detached2 (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 415 tests, 11 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/omp_matinv (stderr)
exp-drd/tests/omp_matinv_racy (stderr)
exp-drd/tests/omp_prime_racy (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Fri Mar 14 03:22:00 2008
--- new.short Fri Mar 14 03:33:54 2008
***************
*** 8,10 ****
! == 415 tests, 11 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
--- 8,10 ----
! == 415 tests, 12 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
***************
*** 22,23 ****
--- 22,24 ----
  exp-drd/tests/omp_prime_racy (stderr)
+ exp-drd/tests/pth_detached2 (stderr)
From: Tom H. <th...@cy...> - 2008-03-14 03:17:26
Nightly build on gill (x86_64, Fedora Core 2) started at 2008-03-14 03:00:05 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 418 tests, 32 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/insn_ssse3 (stdout)
none/tests/amd64/insn_ssse3 (stderr)
none/tests/amd64/ssse3_misaligned (stderr)
none/tests/blockfault (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/insn_ssse3 (stdout)
none/tests/x86/insn_ssse3 (stderr)
none/tests/x86/ssse3_misaligned (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
exp-drd/tests/pth_create_chain (stderr)