From: Aluvala Suman#S. <alu...@ho...> - 2007-12-18 09:44:01
I just joined Shelfari to connect with other book lovers. Come see the books I love and see if we have any in common. Then pick my next book so I can keep on reading.

Click below to join my group of friends on Shelfari!

http://www.shelfari.com/Register.aspx?ActivityId=58994581&InvitationCode=91e1c9f8-aa84-4688-8db4-264cc17ff982

Aluvala Suman#SS185

Shelfari is a free site that lets you share book ratings and reviews with friends and meet people who have similar tastes in books. It also lets you build an online bookshelf, join book clubs, and get good book recommendations from friends. You should check it out.

--------

You have received this email because Aluvala Suman#SS185 (alu...@ho...) directly invited you to join his/her community on Shelfari.

It is against Shelfari's policies to invite people who you don't know directly. Follow this link (http://www.shelfari.com/actions/emailoptout.aspx?email=v...@li...&activityid=58994581) to prevent future invitations to this address. If you believe you do not know this person, you may view (http://www.shelfari.com/o1517612004) his/her Shelfari page or report him/her in our feedback (http://www.shelfari.com/Feedback.aspx) section.

Shelfari, 616 1st Ave #300, Seattle, WA 98104
From: Konstantin S. <kon...@gm...> - 2007-12-18 06:39:10
>> I guess I am not following this ( as I am very new to helgrind and
>> valgrind ). I am using ACE based Queues and Tasks
>> ( http://www.cs.wustl.edu/~schmidt/ACE-overview.html ) and ACE already
>> has Queue protection locks for multithread implementation ( over pthread
>> for Linux and Windows API based threads for Windows ).

Helgrind does not support this out of the box.

>> Are you suggesting I need to change ACE Queue put and get calls ?

This will work; at least it works for me (I have a message queue somewhat
similar to ACE Queue).

>> or do I need to change something in helgrind source code ?

Doing this (and not changing ACE) might be challenging, though probably not
impossible.

I hope the valgrind developers will forgive me for jumping into this
discussion before them. They might have a much simpler solution. :)

--kcc

On Dec 18, 2007 1:08 AM, Sahni, Jitan <jit...@cr...> wrote:
> I guess I am not following this ( as I am very new to helgrind and
> valgrind ). I am using ACE based Queues and Tasks
> ( http://www.cs.wustl.edu/~schmidt/ACE-overview.html ) and ACE already
> has Queue protection locks for multithread implementation ( over pthread
> for Linux and Windows API based threads for Windows ).
>
> Are you suggesting I need to change ACE Queue put and get calls ? or do
> I need to change something in helgrind source code ?
> I read the helgrind documentation and I don't understand how helgrind
> will know when a message passed into the queue does or does NOT enter a
> race condition ? especially when all threads are created at start ...
>
> Thanks
> Jitan
>
> ------------------------------
> *From:* Konstantin Serebryany [mailto:kon...@gm...]
> *Sent:* Monday, December 17, 2007 4:19 PM
> *To:* Sahni, Jitan
> *Cc:* val...@li...; val...@li...
> *Subject:* Re: [Valgrind-users] helgrind false alerts in ACE Tasks ( Linux )
>
> I have the same issue with our own message queues.
> Helgrind does not support them out of the box but can be easily enhanced.
>
> I did it with the help of source code changes:
>
> void Put(T elem) { // put() method of your queue
>   sem_t *uniq_sem = new sem_t;
>   sem_init(uniq_sem);
>   sem_post(uniq_sem);
>   // now do the actual 'put' stuff, putting uniq_sem together with elem
> }
>
> T Get() { // get() method of your queue
>   // do the actual 'get' stuff; get uniq_sem from queue together with elem
>   sem_wait(uniq_sem);
>   sem_destroy(uniq_sem);
>   delete uniq_sem;
>   return elem;
> }
>
> The good thing is that you don't really need to create a real semaphore --
> just create an integer with a unique value (I use atomic_increment of a
> static var) and pass it to the appropriate helgrind user requests (see
> helgrind.h and hg_intercepts.c).
>
> The bad thing is that it might be challenging to achieve the same effect
> without source code changes. It is not enough to intercept the calls to
> the Put/Get routines -- you need to put something into the queue (at
> least, I did not find another way).
>
> --kcc
>
> On Dec 17, 2007 10:23 PM, Sahni, Jitan <jit...@cr...> wrote:
> >
> > does helgrind work properly on ACE Tasks and Message Queues ? I am
> > getting some helgrind race alerts when using ACE Tasks and Message
> > Queues. Basically the parent and child thread are created at the start
> > of the program and then the parent passes a buffer of data to the child
> > through an ACE Message Queue. The waiting child picks up the data, uses
> > it and frees it. helgrind is alerting me to a race condition in this
> > model. Does helgrind work correctly on this model of multiple threads
> > ( tasks and Queues ) ?
> >
> > Also, is there any way to print the memory contents of the location
> > printed in a race condition ?
> >
> > ==============================================================================
> > Please access the attached hyperlink for an important electronic communications disclaimer:
> > http://www.credit-suisse.com/legal/en/disclaimer_email_ib.html
> > ==============================================================================
> >
> > _______________________________________________
> > Valgrind-users mailing list
> > Val...@li...
> > https://lists.sourceforge.net/lists/listinfo/valgrind-users
From: Tom H. <th...@cy...> - 2007-12-18 04:00:21
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-12-18 03:15:03 GMT

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 321 tests, 64 stderr failures, 1 stdout failure, 28 post failures ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 321 tests, 62 stderr failures, 1 stdout failure, 28 post failures ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Tue Dec 18 03:42:39 2007
--- new.short Tue Dec 18 04:00:23 2007
***************
*** 8,10 ****
! == 321 tests, 62 stderr failures, 1 stdout failure, 28 post failures ==
  memcheck/tests/addressable (stderr)
--- 8,10 ----
! == 321 tests, 64 stderr failures, 1 stdout failure, 28 post failures ==
  memcheck/tests/addressable (stderr)
***************
*** 33,34 ****
--- 33,35 ----
  memcheck/tests/stack_changes (stderr)
+ memcheck/tests/supp_unknown (stderr)
  memcheck/tests/x86/bug152022 (stderr)
***************
*** 72,73 ****
--- 73,75 ----
  massif/tests/zero2 (post)
+ none/tests/blockfault (stderr)
  none/tests/mremap (stderr)
From: Tom H. <th...@cy...> - 2007-12-18 03:37:23
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2007-12-18 03:05:09 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 355 tests, 7 stderr failures, 2 stdout failures, 0 post failures ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2007-12-18 03:27:50
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2007-12-18 03:10:05 GMT

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 355 tests, 8 stderr failures, 3 stdout failures, 0 post failures ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_detached (stdout) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 355 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Tue Dec 18 03:19:29 2007
--- new.short Tue Dec 18 03:27:52 2007
***************
*** 8,10 ****
! == 355 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
--- 8,10 ----
! == 355 tests, 8 stderr failures, 3 stdout failures, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
***************
*** 16,17 ****
--- 16,18 ----
  none/tests/mremap2 (stdout)
+ none/tests/pth_detached (stdout)
  helgrind/tests/tc18_semabuse (stderr)
From: Tom H. <th...@cy...> - 2007-12-18 03:14:18
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-12-18 03:00:03 GMT

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 357 tests, 30 stderr failures, 1 stdout failure, 0 post failures ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/blockfault (stderr) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 357 tests, 24 stderr failures, 1 stdout failure, 0 post failures ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Tue Dec 18 03:07:05 2007
--- new.short Tue Dec 18 03:14:20 2007
***************
*** 8,15 ****
! == 357 tests, 24 stderr failures, 1 stdout failure, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
  memcheck/tests/pointer-trace (stderr)
  memcheck/tests/stack_switch (stderr)
  memcheck/tests/x86/scalar (stderr)
  memcheck/tests/x86/scalar_supp (stderr)
  none/tests/fdleak_fcntl (stderr)
--- 8,21 ----
! == 357 tests, 30 stderr failures, 1 stdout failure, 0 post failures ==
! memcheck/tests/addressable (stderr)
! memcheck/tests/badjump (stderr)
! memcheck/tests/describe-block (stderr)
  memcheck/tests/malloc_free_fill (stderr)
+ memcheck/tests/match-overrun (stderr)
  memcheck/tests/pointer-trace (stderr)
  memcheck/tests/stack_switch (stderr)
+ memcheck/tests/supp_unknown (stderr)
  memcheck/tests/x86/scalar (stderr)
  memcheck/tests/x86/scalar_supp (stderr)
+ none/tests/blockfault (stderr)
  none/tests/fdleak_fcntl (stderr)
From: <sv...@va...> - 2007-12-18 01:49:34
Author: sewardj
Date: 2007-12-18 01:49:23 +0000 (Tue, 18 Dec 2007)
New Revision: 7302
Log:
Improve handling of programs which require very large main thread
stacks. Instead of hardwiring the main thread stack to a max of 16MB
and segfaulting the app beyond that point, allow the user to specify
the main stack size using the new flag --main-stacksize=<number>.
If said flag is not present, the current default, which is "MIN(16MB,
current 'ulimit -s' value)", is used.
Modified:
trunk/coregrind/m_initimg/initimg-linux.c
trunk/coregrind/m_main.c
trunk/coregrind/m_options.c
trunk/coregrind/m_scheduler/scheduler.c
trunk/coregrind/m_signals.c
trunk/coregrind/pub_core_options.h
trunk/docs/xml/manual-core.xml
trunk/memcheck/tests/addressable.stderr.exp
trunk/memcheck/tests/badjump.stderr.exp
trunk/memcheck/tests/describe-block.stderr.exp
trunk/memcheck/tests/match-overrun.stderr.exp
trunk/memcheck/tests/supp_unknown.stderr.exp
trunk/none/tests/blockfault.stderr.exp
trunk/none/tests/cmdline1.stdout.exp
trunk/none/tests/cmdline2.stdout.exp
Modified: trunk/coregrind/m_initimg/initimg-linux.c
===================================================================
--- trunk/coregrind/m_initimg/initimg-linux.c 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/coregrind/m_initimg/initimg-linux.c 2007-12-18 01:49:23 UTC (rev 7302)
@@ -593,7 +593,7 @@
VG_(printf)("valgrind: "
"I failed to allocate space for the application's stack.\n");
VG_(printf)("valgrind: "
- "This may be the result of a very large --max-stackframe=\n");
+ "This may be the result of a very large --main-stacksize=\n");
VG_(printf)("valgrind: setting. Cannot continue. Sorry.\n\n");
VG_(exit)(0);
}
@@ -874,25 +874,28 @@
//--------------------------------------------------------------
{
/* When allocating space for the client stack on Linux, take
- notice of the --max-stackframe value. This makes it possible
+ notice of the --main-stacksize value. This makes it possible
to run programs with very large (primary) stack requirements
- simply by specifying --max-stackframe. */
+ simply by specifying --main-stacksize. */
+ /* Logic is as follows:
+ - by default, use the client's current stack rlimit
+ - if that exceeds 16M, clamp to 16M
+ - if a larger --main-stacksize value is specified, use that instead
+ - in all situations, the minimum allowed stack size is 1M
+ */
void* init_sp = iicii.argv - 1;
SizeT m1 = 1024 * 1024;
SizeT m16 = 16 * m1;
- SizeT msf = VG_(clo_max_stackframe) + m1;
- VG_(debugLog)(1, "initimg", "Setup client stack\n");
- /* For the max stack size, use the client's stack rlimit, but
- clamp it to between 1M and 16M. */
- iifii.clstack_max_size = (SizeT)VG_(client_rlimit_stack).rlim_cur;
- if (iifii.clstack_max_size < m1) iifii.clstack_max_size = m1;
- if (iifii.clstack_max_size > m16) iifii.clstack_max_size = m16;
- /* However, if --max-stackframe= is specified, and the given
- value (+ 1 M for spare) exceeds the current setting, use the
- max-stackframe input instead. */
+ SizeT szB = (SizeT)VG_(client_rlimit_stack).rlim_cur;
+ if (szB < m1) szB = m1;
+ if (szB > m16) szB = m16;
+ if (VG_(clo_main_stacksize) > 0) szB = VG_(clo_main_stacksize);
+ if (szB < m1) szB = m1;
+ szB = VG_PGROUNDUP(szB);
+ VG_(debugLog)(1, "initimg",
+ "Setup client stack: size will be %ld\n", szB);
- if (iifii.clstack_max_size < msf) iifii.clstack_max_size = msf;
- iifii.clstack_max_size = VG_PGROUNDUP(iifii.clstack_max_size);
+ iifii.clstack_max_size = szB;
iifii.initial_client_SP
= setup_client_stack( init_sp, env,
Modified: trunk/coregrind/m_main.c
===================================================================
--- trunk/coregrind/m_main.c 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/coregrind/m_main.c 2007-12-18 01:49:23 UTC (rev 7302)
@@ -149,6 +149,8 @@
" --input-fd=<number> file descriptor for input [0=stdin]\n"
" --max-stackframe=<number> assume stack switch for SP changes larger\n"
" than <number> bytes [2000000]\n"
+" --main-stacksize=<number> set size of main thread's stack (in bytes)\n"
+" [use current 'ulimit' value]\n"
"\n";
Char* usage2 =
@@ -243,11 +245,22 @@
}
-/* Peer at previously set up VG_(args_for_valgrind) and extract any
- request for help and also the tool name, and also set up
- VG_(clo_max_stackframe). */
+/* Peer at previously set up VG_(args_for_valgrind) and do some
+ minimal command line processing that must happen early on:
-static void get_helprequest_and_toolname ( Int* need_help, HChar** tool )
+ - show the version string, if requested (-v)
+ - extract any request for help (--help, -h, --help-debug)
+ - get the toolname (--tool=)
+ - set VG_(clo_max_stackframe) (--max-stackframe=)
+ - set VG_(clo_main_stacksize) (--main-stacksize=)
+
+ That's all it does. The main command line processing is done below
+ by main_process_cmd_line_options. Note that
+ main_process_cmd_line_options has to handle but ignore the ones we
+ have handled here.
+*/
+static void early_process_cmd_line_options ( /*OUT*/Int* need_help,
+ /*OUT*/HChar** tool )
{
UInt i;
HChar* str;
@@ -278,16 +291,21 @@
} else if (VG_CLO_STREQN(7, str, "--tool=")) {
*tool = &str[7];
- // Set up VG_(clo_max_stackframe). This is needed by
- // VG_(ii_create_image), which happens before
- // process_command_line_options().
- } else VG_NUM_CLO (str, "--max-stackframe",
- VG_(clo_max_stackframe));
+ // Set up VG_(clo_max_stackframe) and VG_(clo_main_stacksize).
+ // These are needed by VG_(ii_create_image), which happens
+ // before main_process_cmd_line_options().
+ }
+ else VG_NUM_CLO(str, "--max-stackframe", VG_(clo_max_stackframe))
+ else VG_NUM_CLO(str, "--main-stacksize", VG_(clo_main_stacksize));
}
}
-static Bool process_cmd_line_options( UInt* client_auxv, const char* toolname )
+/* The main processing for command line options. See comments above
+ on early_process_cmd_line_options.
+*/
+static Bool main_process_cmd_line_options( UInt* client_auxv,
+ const HChar* toolname )
{
// VG_(clo_log_fd) is used by all the messaging. It starts as 2 (stderr)
// and we cannot change it until we know what we are changing it to is
@@ -375,10 +393,13 @@
else VG_BOOL_CLO(arg, "--error-limit", VG_(clo_error_limit))
else VG_NUM_CLO (arg, "--error-exitcode", VG_(clo_error_exitcode))
else VG_BOOL_CLO(arg, "--show-emwarns", VG_(clo_show_emwarns))
- /* Already done in get_helprequest_and_toolname, but we need to
- redundantly handle it again, so the flag does not get
- rejected as invalid. */
+
+ /* The next two are already done in
+ early_process_cmd_line_options, but we need to redundantly
+ handle them again, so they do not get rejected as invalid. */
else VG_NUM_CLO (arg, "--max-stackframe", VG_(clo_max_stackframe))
+ else VG_NUM_CLO (arg, "--main-stacksize", VG_(clo_main_stacksize))
+
else VG_BOOL_CLO(arg, "--run-libc-freeres", VG_(clo_run_libc_freeres))
else VG_BOOL_CLO(arg, "--show-below-main", VG_(clo_show_below_main))
else VG_BOOL_CLO(arg, "--time-stamp", VG_(clo_time_stamp))
@@ -1404,20 +1425,21 @@
// because the tool has not been initialised.
// p: split_up_argv [for VG_(args_for_valgrind)]
//--------------------------------------------------------------
- VG_(debugLog)(1, "main", "Preprocess command line opts\n");
- get_helprequest_and_toolname(&need_help, &toolname);
+ VG_(debugLog)(1, "main",
+ "(early_) Process Valgrind's command line options\n");
+ early_process_cmd_line_options(&need_help, &toolname);
// Set default vex control params
LibVEX_default_VexControl(& VG_(clo_vex_control));
//--------------------------------------------------------------
// Load client executable, finding in $PATH if necessary
- // p: get_helprequest_and_toolname() [for 'exec', 'need_help']
- // p: layout_remaining_space [so there's space]
+ // p: early_process_cmd_line_options() [for 'exec', 'need_help']
+ // p: layout_remaining_space [so there's space]
//
// Set up client's environment
- // p: set-libdir [for VG_(libdir)]
- // p: get_helprequest_and_toolname [for toolname]
+ // p: set-libdir [for VG_(libdir)]
+ // p: early_process_cmd_line_options [for toolname]
//
// Setup client stack, eip, and VG_(client_arg[cv])
// p: load_client() [for 'info']
@@ -1544,8 +1566,8 @@
//--------------------------------------------------------------
// If --tool and --help/--help-debug was given, now give the core+tool
// help message
- // p: get_helprequest_and_toolname() [for 'need_help']
- // p: tl_pre_clo_init [for 'VG_(tdict).usage']
+ // p: early_process_cmd_line_options() [for 'need_help']
+ // p: tl_pre_clo_init [for 'VG_(tdict).usage']
//--------------------------------------------------------------
VG_(debugLog)(1, "main", "Print help and quit, if requested\n");
if (need_help) {
@@ -1557,9 +1579,10 @@
// p: setup_client_stack() [for 'VG_(client_arg[cv]']
// p: setup_file_descriptors() [for 'VG_(fd_xxx_limit)']
//--------------------------------------------------------------
- VG_(debugLog)(1, "main", "Process Valgrind's command line options, "
- "setup logging\n");
- logging_to_fd = process_cmd_line_options(client_auxv, toolname);
+ VG_(debugLog)(1, "main",
+ "(main_) Process Valgrind's command line options, "
+ "setup logging\n");
+ logging_to_fd = main_process_cmd_line_options(client_auxv, toolname);
//--------------------------------------------------------------
// Zeroise the millisecond counter by doing a first read of it.
@@ -1570,8 +1593,9 @@
//--------------------------------------------------------------
// Print the preamble
// p: tl_pre_clo_init [for 'VG_(details).name' and friends]
- // p: process_cmd_line_options() [for VG_(clo_verbosity), VG_(clo_xml),
- // logging_to_fd]
+ // p: main_process_cmd_line_options() [for VG_(clo_verbosity),
+ // VG_(clo_xml),
+ // logging_to_fd]
//--------------------------------------------------------------
VG_(debugLog)(1, "main", "Print the preamble...\n");
print_preamble(logging_to_fd, toolname);
@@ -1605,7 +1629,7 @@
//--------------------------------------------------------------
// Allow GDB attach
- // p: process_cmd_line_options() [for VG_(clo_wait_for_gdb)]
+ // p: main_process_cmd_line_options() [for VG_(clo_wait_for_gdb)]
//--------------------------------------------------------------
/* Hook to delay things long enough so we can get the pid and
attach GDB in another shell. */
@@ -1634,7 +1658,7 @@
//--------------------------------------------------------------
// Search for file descriptors that are inherited from our parent
- // p: process_cmd_line_options [for VG_(clo_track_fds)]
+ // p: main_process_cmd_line_options [for VG_(clo_track_fds)]
//--------------------------------------------------------------
if (VG_(clo_track_fds)) {
VG_(debugLog)(1, "main", "Init preopened fds\n");
@@ -1833,7 +1857,7 @@
//--------------------------------------------------------------
// Read suppression file
- // p: process_cmd_line_options() [for VG_(clo_suppressions)]
+ // p: main_process_cmd_line_options() [for VG_(clo_suppressions)]
//--------------------------------------------------------------
if (VG_(needs).core_errors || VG_(needs).tool_errors) {
VG_(debugLog)(1, "main", "Load suppressions\n");
Modified: trunk/coregrind/m_options.c
===================================================================
--- trunk/coregrind/m_options.c 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/coregrind/m_options.c 2007-12-18 01:49:23 UTC (rev 7302)
@@ -85,6 +85,7 @@
Bool VG_(clo_show_below_main)= False;
Bool VG_(clo_show_emwarns) = False;
Word VG_(clo_max_stackframe) = 2000000;
+Word VG_(clo_main_stacksize) = 0; /* use client's rlimit.stack */
Bool VG_(clo_wait_for_gdb) = False;
VgSmc VG_(clo_smc_check) = Vg_SmcStack;
HChar* VG_(clo_kernel_variant) = NULL;
Modified: trunk/coregrind/m_scheduler/scheduler.c
===================================================================
--- trunk/coregrind/m_scheduler/scheduler.c 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/coregrind/m_scheduler/scheduler.c 2007-12-18 01:49:23 UTC (rev 7302)
@@ -466,6 +466,14 @@
tid_main = VG_(alloc_ThreadState)();
+ /* Bleh. Unfortunately there are various places in the system that
+ assume that the main thread has a ThreadId of 1.
+ - Helgrind (possibly)
+ - stack overflow message in default_action() in m_signals.c
+ - definitely a lot more places
+ */
+ vg_assert(tid_main == 1);
+
return tid_main;
}
Modified: trunk/coregrind/m_signals.c
===================================================================
--- trunk/coregrind/m_signals.c 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/coregrind/m_signals.c 2007-12-18 01:49:23 UTC (rev 7302)
@@ -1289,6 +1289,23 @@
if (tid != VG_INVALID_THREADID) {
VG_(get_and_pp_StackTrace)(tid, VG_(clo_backtrace_size));
}
+
+ if (sigNo == VKI_SIGSEGV
+ && info && info->si_code > VKI_SI_USER
+ && info->si_code == VKI_SEGV_MAPERR) {
+ VG_(message)(Vg_UserMsg, " If you believe this happened as a "
+ "result of a stack overflow in your");
+ VG_(message)(Vg_UserMsg, " program's main thread (unlikely but"
+ " possible), you can try to increase");
+ VG_(message)(Vg_UserMsg, " the size of the main thread stack"
+ " using the --main-stacksize= flag.");
+ // FIXME: assumes main ThreadId == 1
+ if (VG_(is_valid_tid)(1)) {
+ VG_(message)(Vg_UserMsg,
+ " The main thread stack size used in this run was %d.",
+ (Int)VG_(threads)[1].client_stack_szB);
+ }
+ }
}
if (VG_(is_action_requested)( "Attach to debugger", & VG_(clo_db_attach) )) {
Modified: trunk/coregrind/pub_core_options.h
===================================================================
--- trunk/coregrind/pub_core_options.h 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/coregrind/pub_core_options.h 2007-12-18 01:49:23 UTC (rev 7302)
@@ -157,6 +157,9 @@
consider a stack switch to have happened? Default: 2000000 bytes
NB: must be host-word-sized to be correct (hence Word). */
extern Word VG_(clo_max_stackframe);
+/* How large should Valgrind allow the primary thread's guest stack to
+ be? */
+extern Word VG_(clo_main_stacksize);
/* Delay startup to allow GDB to be attached? Default: NO */
extern Bool VG_(clo_wait_for_gdb);
Modified: trunk/docs/xml/manual-core.xml
===================================================================
--- trunk/docs/xml/manual-core.xml 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/docs/xml/manual-core.xml 2007-12-18 01:49:23 UTC (rev 7302)
@@ -1046,6 +1046,60 @@
</listitem>
</varlistentry>
+ <varlistentry id="opt.main-stacksize" xreflabel="--main-stacksize">
+ <term>
+ <option><![CDATA[--main-stacksize=<number>
+ [default: use current 'ulimit' value] ]]></option>
+ </term>
+ <listitem>
+ <para>Specifies the size of the main thread's stack.</para>
+
+ <para>To simplify its memory management, Valgrind reserves all
+ required space for the main thread's stack at startup. That
+ means it needs to know the required stack size at
+ startup.</para>
+
+ <para>By default, Valgrind uses the current "ulimit" value for
+ the stack size, or 16 MB, whichever is lower. In many cases
+ this gives a stack size in the range 8 to 16 MB, which almost
+ never overflows for most applications.</para>
+
+ <para>If you need a larger total stack size,
+ use <option>--main-stacksize</option> to specify it. Only set
+ it as high as you need, since reserving far more space than you
+ need (that is, hundreds of megabytes more than you need)
+ constrains Valgrind's memory allocators and may reduce the total
+ amount of memory that Valgrind can use. This is only really of
+ significance on 32-bit machines.</para>
+
+ <para>On Linux, you may request a stack of size up to 2GB.
+ Valgrind will stop with a diagnostic message if the stack cannot
+ be allocated. On AIX5 the allowed stack size is restricted to
+ 128MB.</para>
+
+ <para><option>--main-stacksize</option> only affects the stack
+ size for the program's initial thread. It has no bearing on the
+ size of thread stacks, as Valgrind does not allocate
+ those.</para>
+
+ <para>You may need to use both <option>--main-stacksize</option>
+ and <option>--max-stackframe</option> together. It is important
+ to understand that <option>--main-stacksize</option> sets the
+ maximum total stack size,
+ whilst <option>--max-stackframe</option> specifies the largest
+ size of any one stack frame. You will have to work out
+ the <option>--main-stacksize</option> value for yourself
+ (usually, if your application segfaults). But Valgrind will
+ tell you the needed <option>--max-stackframe</option> size, if
+ necessary.</para>
+
+ <para>As discussed further in the description
+ of <option>--max-stackframe</option>, a requirement for a large
+ stack is a sign of potential portability problems. You are best
+ advised to place all large data in heap-allocated memory.</para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
<!-- end of xi:include in the manpage -->
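The interplay the manual text above describes (--main-stacksize caps the total stack, --max-stackframe caps a single frame) can be seen with a small hypothetical program; the function name and sizes here are illustrative only. At 1 MB the frame fits comfortably within the usual 8 to 16 MB default, but scaling the array up to, say, 64 MB would require raising both flags:

```c
#include <assert.h>
#include <string.h>

/* One big stack frame.  At 1 MB this fits a default stack; blow the array
   up to 64 MB and valgrind would need something like
     --main-stacksize=67108864 --max-stackframe=67108864
   to run it without complaining about a stack switch or overflowing the
   main thread's stack. */
static int use_big_frame(void) {
    char big[1 << 20];               /* 1 MB local array */
    memset(big, 7, sizeof big);      /* touch every byte of the frame */
    return big[sizeof big - 1];      /* read back the far end */
}
```

Note the two numbers travel together: a frame larger than --max-stackframe makes Valgrind assume a stack switch, and a frame larger than --main-stacksize simply does not fit.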
Modified: trunk/memcheck/tests/addressable.stderr.exp
===================================================================
--- trunk/memcheck/tests/addressable.stderr.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/memcheck/tests/addressable.stderr.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -19,6 +19,10 @@
Access not within mapped region at address 0x........
at 0x........: test2 (addressable.c:51)
by 0x........: main (addressable.c:125)
+ If you believe this happened as a result of a stack overflow in your
+ program's main thread (unlikely but possible), you can try to increase
+ the size of the main thread stack using the --main-stacksize= flag.
+ The main thread stack size used in this run was 16777216.
ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
malloc/free: in use at exit: 0 bytes in 0 blocks.
Modified: trunk/memcheck/tests/badjump.stderr.exp
===================================================================
--- trunk/memcheck/tests/badjump.stderr.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/memcheck/tests/badjump.stderr.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -8,6 +8,10 @@
Access not within mapped region at address 0x........
at 0x........: ???
by 0x........: (below main) (in /...libc...)
+ If you believe this happened as a result of a stack overflow in your
+ program's main thread (unlikely but possible), you can try to increase
+ the size of the main thread stack using the --main-stacksize= flag.
+ The main thread stack size used in this run was 16777216.
ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
malloc/free: in use at exit: 0 bytes in 0 blocks.
Modified: trunk/memcheck/tests/describe-block.stderr.exp
===================================================================
--- trunk/memcheck/tests/describe-block.stderr.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/memcheck/tests/describe-block.stderr.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -7,6 +7,10 @@
Process terminating with default action of signal 11 (SIGSEGV)
Access not within mapped region at address 0x........
at 0x........: main (describe-block.c:6)
+ If you believe this happened as a result of a stack overflow in your
+ program's main thread (unlikely but possible), you can try to increase
+ the size of the main thread stack using the --main-stacksize= flag.
+ The main thread stack size used in this run was 16777216.
ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
malloc/free: in use at exit: 0 bytes in 0 blocks.
Modified: trunk/memcheck/tests/match-overrun.stderr.exp
===================================================================
--- trunk/memcheck/tests/match-overrun.stderr.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/memcheck/tests/match-overrun.stderr.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -4,6 +4,10 @@
Access not within mapped region at address 0x........
at 0x........: a1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 (match-overrun.c:6)
by 0x........: main (match-overrun.c:11)
+ If you believe this happened as a result of a stack overflow in your
+ program's main thread (unlikely but possible), you can try to increase
+ the size of the main thread stack using the --main-stacksize= flag.
+ The main thread stack size used in this run was 16777216.
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
malloc/free: in use at exit: 0 bytes in 0 blocks.
Modified: trunk/memcheck/tests/supp_unknown.stderr.exp
===================================================================
--- trunk/memcheck/tests/supp_unknown.stderr.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/memcheck/tests/supp_unknown.stderr.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -3,3 +3,7 @@
Access not within mapped region at address 0x........
at 0x........: ???
by 0x........: (below main) (in /...libc...)
+ If you believe this happened as a result of a stack overflow in your
+ program's main thread (unlikely but possible), you can try to increase
+ the size of the main thread stack using the --main-stacksize= flag.
+ The main thread stack size used in this run was 16777216.
Modified: trunk/none/tests/blockfault.stderr.exp
===================================================================
--- trunk/none/tests/blockfault.stderr.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/none/tests/blockfault.stderr.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -3,4 +3,8 @@
Process terminating with default action of signal 11 (SIGSEGV)
Access not within mapped region at address 0x........
at 0x........: main (blockfault.c:32)
+ If you believe this happened as a result of a stack overflow in your
+ program's main thread (unlikely but possible), you can try to increase
+ the size of the main thread stack using the --main-stacksize= flag.
+ The main thread stack size used in this run was 16777216.
Modified: trunk/none/tests/cmdline1.stdout.exp
===================================================================
--- trunk/none/tests/cmdline1.stdout.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/none/tests/cmdline1.stdout.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -40,6 +40,8 @@
--input-fd=<number> file descriptor for input [0=stdin]
--max-stackframe=<number> assume stack switch for SP changes larger
than <number> bytes [2000000]
+ --main-stacksize=<number> set size of main thread's stack (in bytes)
+ [use current 'ulimit' value]
user options for Nulgrind:
(none)
Modified: trunk/none/tests/cmdline2.stdout.exp
===================================================================
--- trunk/none/tests/cmdline2.stdout.exp 2007-12-15 23:08:35 UTC (rev 7301)
+++ trunk/none/tests/cmdline2.stdout.exp 2007-12-18 01:49:23 UTC (rev 7302)
@@ -40,6 +40,8 @@
--input-fd=<number> file descriptor for input [0=stdin]
--max-stackframe=<number> assume stack switch for SP changes larger
than <number> bytes [2000000]
+ --main-stacksize=<number> set size of main thread's stack (in bytes)
+ [use current 'ulimit' value]
user options for Nulgrind:
(none)
|
|
From: <js...@ac...> - 2007-12-18 01:21:53
|
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-12-18 02:00:01 CET Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 288 tests, 27 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/deep_templates (stdout) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) none/tests/faultstatus (stderr) none/tests/fdleak_cmsg (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
|
From: Konstantin S. <kon...@gm...> - 2007-12-17 21:19:50
|
I have the same issue with our own message queues.
Helgrind does not support them out of the box, but it can easily be enhanced.
I did it with the help of source code changes:
void Put(T elem) { // put() method of your queue
sem_t *uniq_sem = new sem_t;
sem_init(uniq_sem, 0, 0); // pshared = 0, initial count 0
sem_post(uniq_sem);
// now do the actual 'put' stuff, putting uniq_sem together with elem
}
T Get() { // get() method of your queue
// do the actual 'get' stuff; get uniq_sem from queue together with elem
sem_wait(uniq_sem);
sem_destroy(uniq_sem);
delete uniq_sem;
return elem;
}
The good thing is that you don't really need to create a real semaphore --
just create an integer with a unique value (I use atomic_increment of a static
var) and pass it to the appropriate helgrind client requests (see helgrind.h and
hg_intercepts.c).
The bad thing is that it might be challenging to achieve the same effect
without source code changes.
It is not enough to intercept the call to Put/Get routines -- you need to
put something into the queue (at least, I did not find another way).
--kcc
On Dec 17, 2007 10:23 PM, Sahni, Jitan <jit...@cr...>
wrote:
>
> does helgrind work properly on ACE Tasks and Message Queues ? I am
> getting some helgrind race alerts when using ACE Tasks and Message Queues.
> Basically the parent and child thread are created at start of the program
> and then parent passes buffer of data to child thru ACE Message Queue . the
> waiting child picks the data , uses it and frees it . helgrind is alerting
> me on race condition on this model . Does helgrind work correctly on this
> model of multi threads ( tasks and Queues ) ?
>
> Also is there any way to print memory contents of the location printed in
> a race condition ?
>
>
>
> _______________________________________________
> Valgrind-users mailing list
> Val...@li...
> https://lists.sourceforge.net/lists/listinfo/valgrind-users
>
>
|
|
From: John R.
|
Bart Van Assche wrote:
>> The attached patch to valgrind-3.3.0 is what I am using to run
>> UML (UserModeLinux) for x86 under memcheck on x86, thus checking
>> the memory accesses of a linux kernel.
> This sounds great. Can you share your experience with running UML under
> memcheck ?

Details will appear in use...@li... and have started to appear in
lin...@vg... . In general, running UML under memcheck is much the same as
running any large application under memcheck for the first time. "Bugs"
crawl out of the woodwork left and right. The kernel allocators make some
assumptions that are unfriendly to memcheck. Threads are a serious
nuisance. Debugging is tedious. printf and "for(;;);" are your friends (to
avoid timing-dependent problems, and so you can attach gdb to a specific
address space.)

Getting a traceback of UML under memcheck is cumbersome. The best progress
often is made by writing a program to detect and isolate the bug (editing
the source of the kernel and/or memcheck, recompiling, and re-running)
rather than by general external application of gdb. The *option* for
memcheck to complain as soon as possible, immediately upon fetch of uninit
bits, without waiting for array indexing or a conditional jump/move, is
sorely missed.

--
John Reiser, jreiser@BitWagon.com |
|
From: Bart V. A. <bar...@gm...> - 2007-12-17 15:11:39
|
On 12/15/07, John Reiser <jr...@bi...> wrote:
> The attached patch to valgrind-3.3.0 is what I am using to run
> UML (UserModeLinux) for x86 under memcheck on x86, thus checking
> the memory accesses of a linux kernel.

This sounds great. Can you share your experience with running UML under
memcheck ?

Regards, Bart. |
|
From: Tom H. <th...@cy...> - 2007-12-17 03:59:46
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-12-17 03:15:03 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 321 tests, 62 stderr failures, 1 stdout failure, 28 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) 
massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
|
From: Tom H. <th...@cy...> - 2007-12-17 03:36:40
|
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2007-12-17 03:05:08 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 355 tests, 7 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
|
From: Tom H. <th...@cy...> - 2007-12-17 03:27:37
|
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2007-12-17 03:10:05 GMT Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 355 tests, 9 stderr failures, 4 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_cvsimple (stdout) none/tests/pth_detached (stdout) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 355 tests, 8 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Mon Dec 17 03:19:24 2007 --- new.short Mon Dec 17 03:27:40 2007 *************** *** 8,10 **** ! == 355 tests, 8 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) --- 8,10 ---- ! 
== 355 tests, 9 stderr failures, 4 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) *************** *** 16,17 **** --- 16,20 ---- none/tests/mremap2 (stdout) + none/tests/pth_cvsimple (stdout) + none/tests/pth_detached (stdout) + helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) |
|
From: Tom H. <th...@cy...> - 2007-12-17 03:14:25
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-12-17 03:00:02 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 357 tests, 24 stderr failures, 1 stdout failure, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) |
|
From: <js...@ac...> - 2007-12-17 01:22:03
|
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-12-17 02:00:01 CET Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 288 tests, 27 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/deep_templates (stdout) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) none/tests/faultstatus (stderr) none/tests/fdleak_cmsg (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
|
From: Nicholas N. <nj...@cs...> - 2007-12-16 18:51:04
|
On Sun, 16 Dec 2007, Dirk Mueller wrote: >> Log: >> Remove client requests that were deprecated in 3.2.0. > > was it really necessary to break applications depending on those macros (which > have not even been -Wdeprecated!) between RC1 and final? > > They did not even introduce any overhead. I didn't know about the 'deprecated' attribute, thanks for the tip. In this case, I don't think we could have used it because they were macros. Well, I guess we could have factored out the relevant code deeper within Memcheck intro functions, but that would have required duplicating real code to distinguish between the deprecated and non-deprecated versions. These macros were marked as deprecated in the 3.2.0 release notes, so there was considerable warning. Sorry for the inconvenience. Nick |
|
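Nick's point -- that GCC's 'deprecated' attribute attaches to declarations, and the removed client requests were macros with no declaration to attach it to -- can be illustrated with a small sketch. The names old_request and OLD_REQUEST are made up for the example and are not real Valgrind client requests:

```c
#include <assert.h>

/* A function declaration can carry the attribute; each use then draws a
   -Wdeprecated-declarations warning at compile time. */
__attribute__((deprecated))
static int old_request(int x) { return x + 1; }

/* A macro offers no declaration to hang the attribute on, so uses of it
   compile silently -- which is why the old macros could not be
   -Wdeprecated before their removal. */
#define OLD_REQUEST(x) ((x) + 1)
```

Both forms still compile and behave identically; only the function use produces a warning, so a macro-based API has no built-in deprecation path short of documentation and release notes.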
From: Dirk M. <dm...@gm...> - 2007-12-16 14:50:09
|
On Tuesday 04 December 2007, sv...@va... wrote:
> Author: njn
> Date: 2007-12-04 21:18:06 +0000 (Tue, 04 Dec 2007)
> New Revision: 7274
>
> Log:
> Remove client requests that were deprecated in 3.2.0.

was it really necessary to break applications depending on those macros
(which have not even been -Wdeprecated!) between RC1 and final?

They did not even introduce any overhead.

Dirk |
|
From: Tom H. <th...@cy...> - 2007-12-16 03:59:58
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-12-16 03:15:03 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 321 tests, 62 stderr failures, 1 stdout failure, 28 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) 
massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
|
From: Tom H. <th...@cy...> - 2007-12-16 03:36:59
|
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2007-12-16 03:05:12 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 355 tests, 7 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
|
From: Tom H. <th...@cy...> - 2007-12-16 03:27:43
|
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2007-12-16 03:10:05 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 355 tests, 8 stderr failures, 3 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_detached (stdout) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
|
From: Tom H. <th...@cy...> - 2007-12-16 03:16:33
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-12-16 03:00:03 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 357 tests, 24 stderr failures, 1 stdout failure, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) |
|
From: <js...@ac...> - 2007-12-16 01:21:44
|
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-12-16 02:00:01 CET Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 288 tests, 27 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/deep_templates (stdout) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) none/tests/faultstatus (stderr) none/tests/fdleak_cmsg (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
|
From: <sv...@va...> - 2007-12-15 23:08:35
|
Author: sewardj
Date: 2007-12-15 23:08:35 +0000 (Sat, 15 Dec 2007)
New Revision: 7301
Log:
Print a nice message if allocation of the stack fails, rather than just
asserting.
Modified:
trunk/coregrind/m_initimg/initimg-linux.c
Modified: trunk/coregrind/m_initimg/initimg-linux.c
===================================================================
--- trunk/coregrind/m_initimg/initimg-linux.c 2007-12-15 22:13:05 UTC (rev 7300)
+++ trunk/coregrind/m_initimg/initimg-linux.c 2007-12-15 23:08:35 UTC (rev 7301)
@@ -44,6 +44,7 @@
#include "pub_core_machine.h"
#include "pub_core_ume.h"
#include "pub_core_options.h"
+#include "pub_core_syscall.h"
#include "pub_core_tooliface.h" /* VG_TRACK */
#include "pub_core_threadstate.h" /* ThreadArchState */
#include "pub_core_initimg.h" /* self */
@@ -572,19 +573,32 @@
/* Create a shrinkable reservation followed by an anonymous
segment. Together these constitute a growdown stack. */
+ res = VG_(mk_SysRes_Error)(0);
ok = VG_(am_create_reservation)(
resvn_start,
resvn_size -inner_HACK,
SmUpper,
anon_size +inner_HACK
);
+ if (ok) {
+ /* allocate a stack - mmap enough space for the stack */
+ res = VG_(am_mmap_anon_fixed_client)(
+ anon_start -inner_HACK,
+ anon_size +inner_HACK,
+ VKI_PROT_READ|VKI_PROT_WRITE|VKI_PROT_EXEC
+ );
+ }
+ if ((!ok) || res.isError) {
+ /* Allocation of the stack failed. We have to stop. */
+ VG_(printf)("valgrind: "
+ "I failed to allocate space for the application's stack.\n");
+ VG_(printf)("valgrind: "
+ "This may be the result of a very large --max-stackframe=\n");
+ VG_(printf)("valgrind: setting. Cannot continue. Sorry.\n\n");
+ VG_(exit)(0);
+ }
+
vg_assert(ok);
- /* allocate a stack - mmap enough space for the stack */
- res = VG_(am_mmap_anon_fixed_client)(
- anon_start -inner_HACK,
- anon_size +inner_HACK,
- VKI_PROT_READ|VKI_PROT_WRITE|VKI_PROT_EXEC
- );
vg_assert(!res.isError);
}
|
|
From: <sv...@va...> - 2007-12-15 22:13:06
|
Author: sewardj
Date: 2007-12-15 22:13:05 +0000 (Sat, 15 Dec 2007)
New Revision: 7300
Log:
When allocating space for the client stack on Linux, take notice of
the --max-stackframe value. This makes it possible to run programs
with very large (primary) stack requirements simply by specifying
--max-stackframe.
Modified:
trunk/coregrind/m_initimg/initimg-linux.c
trunk/coregrind/m_main.c
trunk/coregrind/m_stacktrace.c
Modified: trunk/coregrind/m_initimg/initimg-linux.c
===================================================================
--- trunk/coregrind/m_initimg/initimg-linux.c 2007-12-12 11:42:33 UTC (rev 7299)
+++ trunk/coregrind/m_initimg/initimg-linux.c 2007-12-15 22:13:05 UTC (rev 7300)
@@ -859,13 +859,25 @@
// p: fix_environment() [for 'env']
//--------------------------------------------------------------
{
+ /* When allocating space for the client stack on Linux, take
+ notice of the --max-stackframe value. This makes it possible
+ to run programs with very large (primary) stack requirements
+ simply by specifying --max-stackframe. */
void* init_sp = iicii.argv - 1;
SizeT m1 = 1024 * 1024;
SizeT m16 = 16 * m1;
+ SizeT msf = VG_(clo_max_stackframe) + m1;
VG_(debugLog)(1, "initimg", "Setup client stack\n");
+ /* For the max stack size, use the client's stack rlimit, but
+ clamp it to between 1M and 16M. */
iifii.clstack_max_size = (SizeT)VG_(client_rlimit_stack).rlim_cur;
if (iifii.clstack_max_size < m1) iifii.clstack_max_size = m1;
if (iifii.clstack_max_size > m16) iifii.clstack_max_size = m16;
+ /* However, if --max-stackframe= is specified, and the given
+ value (+ 1 M for spare) exceeds the current setting, use the
+ max-stackframe input instead. */
+
+ if (iifii.clstack_max_size < msf) iifii.clstack_max_size = msf;
iifii.clstack_max_size = VG_PGROUNDUP(iifii.clstack_max_size);
iifii.initial_client_SP
@@ -877,11 +889,15 @@
VG_(debugLog)(2, "initimg",
"Client info: "
- "initial_IP=%p initial_SP=%p initial_TOC=%p brk_base=%p\n",
+ "initial_IP=%p initial_TOC=%p brk_base=%p\n",
(void*)(iifii.initial_client_IP),
- (void*)(iifii.initial_client_SP),
(void*)(iifii.initial_client_TOC),
(void*)VG_(brk_base) );
+ VG_(debugLog)(2, "initimg",
+ "Client info: "
+ "initial_SP=%p max_stack_size=%ld\n",
+ (void*)(iifii.initial_client_SP),
+ (SizeT)iifii.clstack_max_size );
}
//--------------------------------------------------------------
Modified: trunk/coregrind/m_main.c
===================================================================
--- trunk/coregrind/m_main.c 2007-12-12 11:42:33 UTC (rev 7299)
+++ trunk/coregrind/m_main.c 2007-12-15 22:13:05 UTC (rev 7300)
@@ -244,7 +244,8 @@
/* Peer at previously set up VG_(args_for_valgrind) and extract any
- request for help and also the tool name. */
+ request for help and also the tool name, and also set up
+ VG_(clo_max_stackframe). */
static void get_helprequest_and_toolname ( Int* need_help, HChar** tool )
{
@@ -276,7 +277,13 @@
// here.
} else if (VG_CLO_STREQN(7, str, "--tool=")) {
*tool = &str[7];
- }
+
+ // Set up VG_(clo_max_stackframe). This is needed by
+ // VG_(ii_create_image), which happens before
+ // process_command_line_options().
+ } else VG_NUM_CLO (str, "--max-stackframe",
+ VG_(clo_max_stackframe));
+
}
}
@@ -368,6 +375,9 @@
else VG_BOOL_CLO(arg, "--error-limit", VG_(clo_error_limit))
else VG_NUM_CLO (arg, "--error-exitcode", VG_(clo_error_exitcode))
else VG_BOOL_CLO(arg, "--show-emwarns", VG_(clo_show_emwarns))
+ /* Already done in get_helprequest_and_toolname, but we need to
+ redundantly handle it again, so the flag does not get
+ rejected as invalid. */
else VG_NUM_CLO (arg, "--max-stackframe", VG_(clo_max_stackframe))
else VG_BOOL_CLO(arg, "--run-libc-freeres", VG_(clo_run_libc_freeres))
else VG_BOOL_CLO(arg, "--show-below-main", VG_(clo_show_below_main))
Modified: trunk/coregrind/m_stacktrace.c
===================================================================
--- trunk/coregrind/m_stacktrace.c 2007-12-12 11:42:33 UTC (rev 7299)
+++ trunk/coregrind/m_stacktrace.c 2007-12-15 22:13:05 UTC (rev 7300)
@@ -97,11 +97,9 @@
/* Assertion broken before main() is reached in pthreaded programs; the
* offending stack traces only have one item. --njn, 2002-aug-16 */
/* vg_assert(fp_min <= fp_max);*/
-
- if (fp_min + VG_(clo_max_stackframe) <= fp_max) {
- /* If the stack is ridiculously big, don't poke around ... but
- don't bomb out either. Needed to make John Regehr's
- user-space threads package work. JRS 20021001 */
+ if (fp_min + 512 >= fp_max) {
+ /* If the stack limits look bogus, don't poke around ... but
+ don't bomb out either. */
ips[0] = ip;
return 1;
}
|