From: Jeremy F. <je...@go...> - 2003-04-09 23:13:26
Quoting Julian Seward <js...@ac...>:
> Try the patch called 09-rdtsc-calibration from
> http://www.goop.org/~jeremy/valgrind/

I doubt that will help much - that just stops an assertion failure when
getting the calibration. The basic problem is that the TSC is variable-rate,
and therefore useless as a timebase.

J
From: Bastien C. <ba...@ch...> - 2003-04-09 22:10:20
On Wednesday 09 April 2003 19:37, you wrote:
> My guess is that the STL allocator keeps the memory around. What happens
> if you call f1() several times in main()?

No change, at least for this small example. But I continued to play a bit
with containers my program uses and came up with this gem:

------------------------------------------------
#include <iostream>
#include <deque>
#include <set>
using namespace std;

void f1(int ic)
{
    set<char> n;
    for(char c='a'; c < 'z'; c++) n.insert(c);
    deque<set<char> > v;
    for(int i=0; i<ic; i++) v.push_back(n);
}

int main(){
    f1(1000);
    cout << "The memory footprint ..." << endl;
    f1(20000);
    cout << "... should be ..." << endl;
    f1(50000);
    cout << "... near zero exactly now! (it isn't *sigh*)" << endl;
    //while(1);
    return 0;
}
------------------------------------------------
Everyone's invited to let this run on their system ...
------------------------------------------------
==13669== LEAK SUMMARY:
==13669==    definitely lost: 16 bytes in 1 blocks.
==13669==    possibly lost:   0 bytes in 0 blocks.
==13669==    still reachable: 32568232 bytes in 127 blocks.
==13669==    suppressed: 0 bytes in 0 blocks.
------------------------------------------------
... and play with it: the numbers get lower when stopping after f1(2000) or
f1(20000). Best thing is, when one uncomments the while(1); statement the
memory footprint is around 80M (where it shouldn't be much greater than the
size of the executable).

I dug through the newsgroups a bit and found this:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=utf-8&frame=right&th=432bcc216e83d78f&seekm=slrnaa3v4d.lt.mixtim_nospam%40taco.mixtim.ispwest.com#link3

Here's one interesting part:
> The default allocator for many c++ standard libraries (such as the one that
> ships with gcc) never actually "frees" memory. It just adds the memory back
> to a pool for later use. So, if you allocate a map that contains 80 MB
> worth of data and then destroy the map, your application still has that 80
> MB allocated and will until your program exits.

On the other hand, after the program has exited, valgrind should not find any
leaks (the STL pool should have been freed, right?). Any ideas?

So the STL is to 'blame'. The description of the pool behaviour should go
into the FAQ of valgrind, though; I'm sure other people tripped (are
tripping, will trip) over that too.

And now for something completely related (but going off topic): is there any
way to "flush" that pool?

Regards,
Bastien

PS: Did I already thank the valgrind author? No? I have longed for a tool
like that for Linux since I first worked with Purify on a Sun. Thanks a lot.

--
-- The universe has its own cure for stupidity. --
-- Unfortunately, it doesn't always apply it. --
From: David E. <da...@2g...> - 2003-04-09 17:37:55
On Wed, 2003-04-09 at 15:45, Bastien Chevreux wrote:
> Hello there,
>
> I have serious troubles with possible memory leaks in programs heavily
> using the STL. I am not able to tell whether it is a problem with the
> compiler, the STL or wrong valgrind output, so I'll start here before
> filing in a gcc bug report.
>
> One of my programs grew larger and larger without my knowing why, so I took
> valgrind to look and started building testcases to find out what was
> happening. I have a SuSE 8.1 distribution, that's kernel 2.4.x,
> glibc2.2 and gcc version 3.2
>
> Consider this test case example:
>
> -------------------------------------------------------
> #include <vector>
> #include <ext/hash_map>
> using namespace __gnu_cxx;
>
> void f1()
> {
> int n=42;
> vector<int> v;
> for(int i=0; i<1000000; i++) v.push_back(n);
> }
>
> int main(){
> f1();
> return 0;
> }
> ----------------------------------------------------------
>
> g++ -g -o test test.C
>
> and then
>
> valgrind --leak-resolution=high --num-callers=20 --show-reachable=yes --leak-check=yes ./test
>
> I will get the following summary:
>
> ----------------------------------------------------------
> ==16880== LEAK SUMMARY:
> ==16880== definitely lost: 16 bytes in 1 blocks.
> ==16880== possibly lost: 0 bytes in 0 blocks.
> ==16880== still reachable: 6912 bytes in 4 blocks.
> ----------------------------------------------------------
>
> The number which troubles me is the bytes that are still
> reachable. Here's the detail:
>
> ----------------------------------------------------------
> ==19169== 6848 bytes in 3 blocks are still reachable in loss record 3 of 3
> ==19169== at 0x4015DE3B: __builtin_new (vg_clientfuncs.c:129)
> ==19169== by 0x4015DE76: operator new(unsigned) (vg_clientfuncs.c:142)
> ==19169== by 0x40278E00: std::__default_alloc_template<true, 0>::_S_chunk_alloc(unsigned, int&) (in /usr/lib/libstdc++.so.5.0.0)
> ==19169== by 0x40278D1C: std::__default_alloc_template<true, 0>::_S_refill(unsigned) (in /usr/lib/libstdc++.so.5.0.0)
> ==19169== by 0x402788EF: std::__default_alloc_template<true, 0>::allocate(unsigned) (in /usr/lib/libstdc++.so.5.0.0)
> ==19169== by 0x8049008: std::__simple_alloc<int, std::__default_alloc_template<true, 0> >::allocate(unsigned) (/usr/include/g++/bits/stl_alloc.h:224)
> ==19169== by 0x8048D7E: std::_Vector_alloc_base<int, std::allocator<int>, true>::_M_allocate(unsigned) (/usr/include/g++/bits/stl_vector.h:121)
> ==19169== by 0x8048A45: std::vector<int, std::allocator<int> >::_M_insert_aux(__gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > >, int const&) (/usr/include/g++/bits/stl_vector.h:898)
> ==19169== by 0x804884C: std::vector<int, std::allocator<int> >::push_back(int const&) (/usr/include/g++/bits/stl_vector.h:496)
> ==19169== by 0x80486A1: f1() (test2.C:10)
> ==19169== by 0x80487A2: main (test2.C:21)
> ==19169== by 0x403094A1: __libc_start_main (in /lib/libc.so.6)
> ==19169== by 0x8048580: (within /home/bach/work/assembly/htga/src/progs/test)
> ----------------------------------------------------------
>
> Regarding the program above, I sincerely do think that there should be
> no leak at all, not even in "reachable" parts.
>
> Now, a few bytes don't hurt. Unfortunately, when I let run my real
> program, here's what I get (for really small data sets):
>
> ----------------------------------------------------------
> ==698== LEAK SUMMARY:
> ==698== definitely lost: 24825 bytes in 3492 blocks.
> ==698== possibly lost: 1398 bytes in 3 blocks.
> ==698== still reachable: 1125492 bytes in 65 blocks.
> ----------------------------------------------------------
>
> (please note that I don't care about the definitely and
> possibly lost numbers; these I can trace back to real oversights in my
> code.)
>
> The "still reachable" 1M number is about 40 times greater than the
> other two numbers added together and I have the distinct impression
> that the memory is really eaten away somewhere:
> 1) all valgrind detail messages are more or less similar to the one of
> the test case above, all have something to do with containers
> 2) putting a "while(1)" loop at a distinctive point in my program
> where everything should have been more or less freed after some
> heavy computation (using about any existing STL container type
> that exists with dozens of different classes) gives me remaining
> memory footprints of >1G (yes, that's gigabyte).
>
> Now my question: any idea where to start searching? is valgrind at
> fault (which I don't think, but one never knows)? the gnu STL? the gnu
> g++ compiler?
>
> Any suggestion welcome.
My guess is that the STL allocator keeps the memory around. What happens
if you call f1() several times in main()?
--
-\- David Eriksson -/- www.2GooD.nu
"I personally refuse to use inferior tools because of ideology."
- Linus Torvalds
From: Bastien C. <ba...@ch...> - 2003-04-09 13:45:07
Hello there,

I have serious troubles with possible memory leaks in programs heavily
using the STL. I am not able to tell whether it is a problem with the
compiler, the STL, or wrong valgrind output, so I'll start here before
filing a gcc bug report.

One of my programs grew larger and larger without my knowing why, so I took
valgrind to look and started building testcases to find out what was
happening. I have a SuSE 8.1 distribution, that's kernel 2.4.x,
glibc 2.2 and gcc version 3.2.

Consider this test case example:

-------------------------------------------------------
#include <vector>
#include <ext/hash_map>
using namespace __gnu_cxx;

void f1()
{
    int n=42;
    vector<int> v;
    for(int i=0; i<1000000; i++) v.push_back(n);
}

int main(){
    f1();
    return 0;
}
----------------------------------------------------------

g++ -g -o test test.C

and then

valgrind --leak-resolution=high --num-callers=20 --show-reachable=yes --leak-check=yes ./test

I will get the following summary:

----------------------------------------------------------
==16880== LEAK SUMMARY:
==16880==    definitely lost: 16 bytes in 1 blocks.
==16880==    possibly lost:   0 bytes in 0 blocks.
==16880==    still reachable: 6912 bytes in 4 blocks.
----------------------------------------------------------

The number which troubles me is the bytes that are still
reachable. Here's the detail:

----------------------------------------------------------
==19169== 6848 bytes in 3 blocks are still reachable in loss record 3 of 3
==19169==    at 0x4015DE3B: __builtin_new (vg_clientfuncs.c:129)
==19169==    by 0x4015DE76: operator new(unsigned) (vg_clientfuncs.c:142)
==19169==    by 0x40278E00: std::__default_alloc_template<true, 0>::_S_chunk_alloc(unsigned, int&) (in /usr/lib/libstdc++.so.5.0.0)
==19169==    by 0x40278D1C: std::__default_alloc_template<true, 0>::_S_refill(unsigned) (in /usr/lib/libstdc++.so.5.0.0)
==19169==    by 0x402788EF: std::__default_alloc_template<true, 0>::allocate(unsigned) (in /usr/lib/libstdc++.so.5.0.0)
==19169==    by 0x8049008: std::__simple_alloc<int, std::__default_alloc_template<true, 0> >::allocate(unsigned) (/usr/include/g++/bits/stl_alloc.h:224)
==19169==    by 0x8048D7E: std::_Vector_alloc_base<int, std::allocator<int>, true>::_M_allocate(unsigned) (/usr/include/g++/bits/stl_vector.h:121)
==19169==    by 0x8048A45: std::vector<int, std::allocator<int> >::_M_insert_aux(__gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > >, int const&) (/usr/include/g++/bits/stl_vector.h:898)
==19169==    by 0x804884C: std::vector<int, std::allocator<int> >::push_back(int const&) (/usr/include/g++/bits/stl_vector.h:496)
==19169==    by 0x80486A1: f1() (test2.C:10)
==19169==    by 0x80487A2: main (test2.C:21)
==19169==    by 0x403094A1: __libc_start_main (in /lib/libc.so.6)
==19169==    by 0x8048580: (within /home/bach/work/assembly/htga/src/progs/test)
----------------------------------------------------------

Regarding the program above, I sincerely do think that there should be
no leak at all, not even in "reachable" parts.

Now, a few bytes don't hurt. Unfortunately, when I let my real program
run, here's what I get (for really small data sets):

----------------------------------------------------------
==698== LEAK SUMMARY:
==698==    definitely lost: 24825 bytes in 3492 blocks.
==698==    possibly lost:   1398 bytes in 3 blocks.
==698==    still reachable: 1125492 bytes in 65 blocks.
----------------------------------------------------------

(please note that I don't care about the definitely and
possibly lost numbers; these I can trace back to real oversights in my
code.)

The "still reachable" 1M number is about 40 times greater than the
other two numbers added together, and I have the distinct impression
that the memory is really eaten away somewhere:
1) all valgrind detail messages are more or less similar to the one of
   the test case above; all have something to do with containers
2) putting a "while(1)" loop at a distinctive point in my program,
   where everything should have been more or less freed after some
   heavy computation (using about any existing STL container type
   with dozens of different classes), gives me remaining
   memory footprints of >1G (yes, that's gigabytes).

Now my question: any idea where to start searching? Is valgrind at
fault (which I don't think, but one never knows)? The GNU STL? The GNU
g++ compiler?

Any suggestion welcome.

Regards,
Bastien

--
-- The universe has its own cure for stupidity. --
-- Unfortunately, it doesn't always apply it. --
From: Julian S. <js...@ac...> - 2003-04-09 07:45:22
On Wednesday 09 April 2003 7:17 am, Sefer Tov wrote:
> Indeed, you were both correct.
> I am running it on a laptop (compaq armada m300) with apm enabled.
>
> I must admit that this proves to be quite an annoyance, since I do most of
> my work on a laptop. Does that affect the timing of other functions as
> well?
>
> I'm unfamiliar with these high resolution time counters in x86, but it
> strikes me as odd that Intel wouldn't provide an equivalent, reliable
> mechanism for laptops as well (maybe in exchange for accuracy or a slight
> performance impact).
>
> I'm curious, is there any way around this?

Try the patch called 09-rdtsc-calibration from
http://www.goop.org/~jeremy/valgrind/

J
From: Sefer T. <se...@ho...> - 2003-04-09 07:18:16
Indeed, you were both correct.
I am running it on a laptop (compaq armada m300) with apm enabled.
I must admit that this proves to be quite an annoyance, since I do most of
my work on a laptop. Does that affect the timing of other functions as well?
I'm unfamiliar with these high resolution time counters in x86, but it
strikes me as odd that Intel wouldn't provide an equivalent, reliable
mechanism for laptops as well (maybe in exchange for accuracy or a slight
performance impact).
I'm curious, is there any way around this?
Thanks in advance,
Sefer.
>From: Julian Seward <js...@ac...>
>To: Jeremy Fitzhardinge <je...@go...>, Sefer Tov <se...@ho...>
>CC: val...@li...
>Subject: Re: [Valgrind-users] Scheduler problem
>Date: Tue, 8 Apr 2003 21:24:24 +0000
>
>
>Sefer,
>
>I tested the program you sent me (below) and it behaves
>identically running on V from normal; no timing anomalies.
>This is running on a 1133Mhz PIII-T (desktop) machine.
>
>I suspect Jeremy may be right about the power management thing;
>he's had a patch available for that for a while. Can you
>clarify the situation re power management on your platform?
>Thanks.
>
>J
>
>#include <pthread.h>
>#include <stdio.h>
>#include <unistd.h>
>
>
>void *start(void *p)
>{
> printf("Hi!\n");
> sleep(1);
> printf("Here\n");
> sleep(1);
>
> return 0;
>}
>
>
>int main()
>{
> pthread_t tid;
> void *p;
> int i;
>
> for ( i = 0; i < 5; ++i ) {
> pthread_create(&tid, 0, start, 0);
> }
> pthread_join(tid, &p);
>
> return 0;
>}
>
>On Tuesday 08 April 2003 10:12 am, Jeremy Fitzhardinge wrote:
> > Quoting Sefer Tov <se...@ho...>:
> > > Hi!
> > >
> > > I've been testing a short threaded program, and I noticed that
>sleep(x),
> > >
> > > although it utilizes no CPU, it schedules poorly (the program slows down
> > > almost
> > > to a halt).
> >
> > Is your machine a laptop, or a desktop with power management enabled?
> > Valgrind uses the TSC register as its timebase, and I've noticed on my
> > laptop the TSC doesn't advance when the machine is idle. You can easily
> > tell if this is the case: if you run a CPU-bound program at the same
>time,
> > then the TSC advances at near full speed and the sleeps are for the
>right
> > time.
> >
> > J
>