From: Josef W. <Jos...@gm...> - 2009-04-18 22:56:10
On Saturday 18 April 2009, Dominic Account wrote:
> I hope I did not introduce new bugs here. Please, review
> my patch.
Just a minor remark: inlining the patch in the mail would have made reviewing it easier.
> +void log_1I_1Dm_cache_access(InstrInfo* n, Addr data_addr, Word data_size)
> +{
> + //VG_(printf)("1I_1Dm: CCaddr=0x%010lx, iaddr=0x%010lx, isize=%lu\n"
> + // " daddr=0x%010lx, dsize=%lu\n",
> + // n, n->instr_addr, n->instr_len, data_addr, data_size);
> + cachesim_I1_doref(n->instr_addr, n->instr_len,
> + &n->parent->Ir.m1, &n->parent->Ir.m2);
> + n->parent->Ir.a++;
> +
> + cachesim_D1_doref(data_addr, data_size,
> + &n->parent->Dr.m1, &n->parent->Dr.m2);
> + cachesim_D1_doref(data_addr, data_size,
> + &n->parent->Dw.m1, &n->parent->Dw.m2);
Given the cache model, and the fact that no other thread can access the cache
in between, the second call into the simulator should not be needed, as it will
always be an L1 hit. The same goes for the other handlers.
It would be interesting to see the performance hit introduced by your patch.
Josef
> +
> + n->parent->Dr.a++;
> + n->parent->Dw.a++;
> +}
From: Vince W. <vi...@cs...> - 2009-04-18 19:26:14
It's interesting to hear about the interest in multi-core Cachegrind. I'm working on a somewhat similar project, where I am generating memory traces with Valgrind, but I'm planning on using the Ruby CMP memory simulator from Wisconsin's Multifacet group to do the actual simulation. The work is going a bit slowly though, as unfortunately not all source code trees out there are as clean and understandable as Valgrind's.

On Fri, 17 Apr 2009, Josef Weidendorfer wrote:
> If you do that, accesses can cover 3 cache lines, which the simulation code
> is not prepared for. BTW, there are x86 instructions with large data
> read/writes, e.g. pushf/popf.

Another example of large cache writes on x86 are the string instructions, for example rep stosb, etc. Actual hardware considers a rep stosb with ecx=4096 as one 4096-byte write, not as 4096 individual byte writes. So if you are ever comparing your results against hardware perf counters, this is something to keep in mind. Also, if you are monitoring icache behavior, that's counted as one single store instruction, not 4096 of them. (I have special code in my valgrind tools that handles this correctly; I forget if cachegrind currently does this properly.)

Vince
From: <sv...@va...> - 2009-04-18 16:09:05
Author: bart
Date: 2009-04-18 17:09:01 +0100 (Sat, 18 Apr 2009)
New Revision: 9576
Log:
Added expected output for 32-bit systems. Added -64bit suffix for existing expected output file.
Added:
trunk/drd/tests/tc19_shadowmem.stderr.exp-32bit
trunk/drd/tests/tc19_shadowmem.stderr.exp-64bit
Removed:
trunk/drd/tests/tc19_shadowmem.stderr.exp
[... diff too large to include ...]
From: <sv...@va...> - 2009-04-18 15:41:06
Author: bart
Date: 2009-04-18 16:40:54 +0100 (Sat, 18 Apr 2009)
New Revision: 9575
Log:
glibc-2.3 expected output updates.
Modified:
trunk/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3
trunk/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3
Modified: trunk/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3
===================================================================
--- trunk/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3 2009-04-18 15:25:37 UTC (rev 9574)
+++ trunk/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3 2009-04-18 15:40:54 UTC (rev 9575)
@@ -31,21 +31,7 @@
make pthread_mutex_lock fail: skipped on glibc < 2.4
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_timedlock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:121)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-
-The object at address 0x........ is not a mutex.
+Mutex not locked: mutex 0x........, recursion count 0, owner 0.
at 0x........: pthread_mutex_unlock (drd_pthread_intercepts.c:?)
by 0x........: main (tc20_verifywrap.c:125)
mutex 0x........ was first observed at:
@@ -145,4 +131,4 @@
at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
by 0x........: main (tc20_verifywrap.c:145)
-ERROR SUMMARY: 15 errors from 15 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 13 errors from 13 contexts (suppressed: 0 from 0)
Modified: trunk/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3
===================================================================
--- trunk/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3 2009-04-18 15:25:37 UTC (rev 9574)
+++ trunk/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3 2009-04-18 15:40:54 UTC (rev 9575)
@@ -29,30 +29,19 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: pthread_mutex_destroy (drd_pthread_intercepts.c:?)
by 0x........: main (tc20_verifywrap.c:102)
+mutex 0x........ was first observed at:
+ at 0x........: pthread_mutex_init (drd_pthread_intercepts.c:?)
+ by 0x........: main (tc20_verifywrap.c:100)
make pthread_mutex_lock fail: skipped on glibc < 2.4
-[1/1] pre_mutex_lock invalid mutex 0x........ rc 0 owner 0
+[1/1] pre_mutex_lock mutex 0x........ rc 0 owner 0
+[1/1] post_mutex_lock mutex 0x........ rc 0 owner 0 (locking failed)
+[1/1] mutex_trylock mutex 0x........ rc 0 owner 0
+[1/1] post_mutex_lock mutex 0x........ rc 0 owner 0 (locking failed)
+[1/1] mutex_unlock mutex 0x........ rc 0
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-[1/1] post_mutex_lock invalid mutex 0x........ rc 0 owner 0 (locking failed)
-[1/1] mutex_trylock invalid mutex 0x........ rc 0 owner 0
-
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_timedlock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:121)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-[1/1] post_mutex_lock invalid mutex 0x........ rc 0 owner 0 (locking failed)
-[1/1] mutex_unlock invalid mutex 0x........ rc 0
-
-The object at address 0x........ is not a mutex.
+Mutex not locked: mutex 0x........, recursion count 0, owner 0.
at 0x........: pthread_mutex_unlock (drd_pthread_intercepts.c:?)
by 0x........: main (tc20_verifywrap.c:125)
mutex 0x........ was first observed at:
@@ -219,4 +208,4 @@
[1/1] post_mutex_lock recursive mutex 0x........ rc 0 owner 1
[1/1] mutex_unlock recursive mutex 0x........ rc 1
-ERROR SUMMARY: 15 errors from 15 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 13 errors from 13 contexts (suppressed: 0 from 0)
From: <sv...@va...> - 2009-04-18 15:25:55
Author: bart
Date: 2009-04-18 16:25:37 +0100 (Sat, 18 Apr 2009)
New Revision: 9574
Log:
More expected output updates as part of the fix for bug #187048. Should have been included in the previous commit.
Modified:
branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3
branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3
Modified: branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3
===================================================================
--- branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3 2009-04-17 17:51:31 UTC (rev 9573)
+++ branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap.stderr.exp-glibc2.3 2009-04-18 15:25:37 UTC (rev 9574)
@@ -28,21 +28,7 @@
make pthread_mutex_lock fail: skipped on glibc < 2.4
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_timedlock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:121)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-
-The object at address 0x........ is not a mutex.
+Mutex not locked: mutex 0x........, recursion count 0, owner 0.
at 0x........: pthread_mutex_unlock (drd_pthread_intercepts.c:?)
by 0x........: main (tc20_verifywrap.c:125)
mutex 0x........ was first observed at:
@@ -136,4 +122,4 @@
Destroying locked mutex: mutex 0x........, recursion count 1, owner 1.
at 0x........: main (tc20_verifywrap.c:262)
-ERROR SUMMARY: 15 errors from 15 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 13 errors from 13 contexts (suppressed: 0 from 0)
Modified: branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3
===================================================================
--- branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3 2009-04-17 17:51:31 UTC (rev 9573)
+++ branches/VALGRIND_3_4_BRANCH/drd/tests/tc20_verifywrap2.stderr.exp-glibc2.3 2009-04-18 15:25:37 UTC (rev 9574)
@@ -32,27 +32,13 @@
make pthread_mutex_lock fail: skipped on glibc < 2.4
-[1/1] pre_mutex_lock invalid mutex 0x........ rc 0 owner 0
+[1/1] pre_mutex_lock mutex 0x........ rc 0 owner 0
+[1/1] post_mutex_lock mutex 0x........ rc 0 owner 0 (locking failed)
+[1/1] mutex_trylock mutex 0x........ rc 0 owner 0
+[1/1] post_mutex_lock mutex 0x........ rc 0 owner 0 (locking failed)
+[1/1] mutex_unlock mutex 0x........ rc 0
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-[1/1] post_mutex_lock invalid mutex 0x........ rc 0 owner 0 (locking failed)
-[1/1] mutex_trylock invalid mutex 0x........ rc 0 owner 0
-
-The object at address 0x........ is not a mutex.
- at 0x........: pthread_mutex_timedlock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:121)
-mutex 0x........ was first observed at:
- at 0x........: pthread_mutex_trylock (drd_pthread_intercepts.c:?)
- by 0x........: main (tc20_verifywrap.c:116)
-[1/1] post_mutex_lock invalid mutex 0x........ rc 0 owner 0 (locking failed)
-[1/1] mutex_unlock invalid mutex 0x........ rc 0
-
-The object at address 0x........ is not a mutex.
+Mutex not locked: mutex 0x........, recursion count 0, owner 0.
at 0x........: pthread_mutex_unlock (drd_pthread_intercepts.c:?)
by 0x........: main (tc20_verifywrap.c:125)
mutex 0x........ was first observed at:
@@ -213,4 +199,4 @@
[1/1] post_mutex_lock recursive mutex 0x........ rc 0 owner 1
[1/1] mutex_unlock recursive mutex 0x........ rc 1
-ERROR SUMMARY: 15 errors from 15 contexts (suppressed: 0 from 0)
+ERROR SUMMARY: 13 errors from 13 contexts (suppressed: 0 from 0)
From: Dominic A. <zer...@go...> - 2009-04-18 10:39:38
Hi,

I have prepared a patch (see the attachment) and tested it against my simple "test.c" I had posted earlier. I also ran the Cachegrind regression suite successfully. The patch has been made against the current svn version.

The patch should fix the way Dm-events are handled. Currently Dm-events are treated as if they were Dr-events, which leads to fewer writes/Dw-events being recorded. This leads to more or less inaccurate results depending on the instruction mix and frequency. If you look at the function "cg_fini" in "cg_main.c", which dumps the cache statistics, you will see that "D_total" is derived from "Dw_total". Thus the number of total writes may be off, as well as the miss rates reported.

I hope I did not introduce new bugs here. Please review my patch.

Ciao
Dominic
From: Dominic A. <zer...@go...> - 2009-04-18 10:29:45
Hi Josef,
Yes, indeed I have thought about influencing the scheduling in Valgrind.
This should be doable by granting the big lock only according to a custom
scheduler. This modifies the core however and will break compatibility.
Reducing the time slice was also on my mind. Simics has a similar feature
which allows switching between single instructions even. There is a
performance/accuracy trade-off obviously.
I have not thought much about simulated time, but a straightforward
way would be to feed the instructions into a timing model.
Regarding the miss rate: everything which depends on "Dw_total" may
be off. Please have a look at "cg_fini":
D_total.a = Dr_total.a + Dw_total.a;
D_total.m1 = Dr_total.m1 + Dw_total.m1;
D_total.m2 = Dr_total.m2 + Dw_total.m2;
Ciao
Dominic
P.S.
Anyway, I am very glad CacheGrind exists. It is a very good starting point!
I have also prepared a patch.
From: Tom H. <th...@cy...> - 2009-04-18 03:04:01
Nightly build on lloyd (x86_64, Fedora 7) started at 2009-04-18 03:05:08 BST

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... done
Regression test results follow

== 478 tests, 0 stderr failures, 0 stdout failures, 0 post failures ==

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... done
Regression test results follow

== 477 tests, 0 stderr failures, 0 stdout failures, 0 post failures ==

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sat Apr 18 03:34:55 2009
--- new.short Sat Apr 18 04:03:44 2009
***************
*** 8,10 ****
! == 477 tests, 0 stderr failures, 0 stdout failures, 0 post failures ==
--- 8,10 ----
! == 478 tests, 0 stderr failures, 0 stdout failures, 0 post failures ==
From: Tom H. <th...@cy...> - 2009-04-18 02:47:15
Nightly build on mg (x86_64, Fedora 9) started at 2009-04-18 03:10:05 BST

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 484 tests, 0 stderr failures, 1 stdout failure, 0 post failures ==
none/tests/linux/mremap2 (stdout)

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 483 tests, 0 stderr failures, 1 stdout failure, 0 post failures ==
none/tests/linux/mremap2 (stdout)

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sat Apr 18 03:28:32 2009
--- new.short Sat Apr 18 03:47:00 2009
***************
*** 8,10 ****
! == 483 tests, 0 stderr failures, 1 stdout failure, 0 post failures ==
none/tests/linux/mremap2 (stdout)
--- 8,10 ----
! == 484 tests, 0 stderr failures, 1 stdout failure, 0 post failures ==
none/tests/linux/mremap2 (stdout)
From: Nicholas N. <n.n...@gm...> - 2009-04-18 00:35:36
On Sat, Apr 18, 2009 at 8:15 AM, Greg Parker <gp...@ap...> wrote:
>>> ../tests/sys_mman.h:25: error: 'MAP_ANONYMOUS' undeclared (first use in
>>> this function)
>
> That's spelled MAP_ANON on Mac OS X. (Also, make sure you set either
> MAP_PRIVATE or MAP_SHARED; if you set neither or both, mmap() may return an
> error.)

Earlier in the file is a conditional #define that defines MAP_ANONYMOUS as equal to MAP_ANON on Darwin, so the tests can be uniform. I think the problem was that Filipe hadn't rerun autogen.sh recently.

Nick