|
From: <erg...@pc...> - 2005-03-30 23:12:15
|
Ok, I'm baffled. Here's some sample output:

==25143== Thread 3:
==25143== Possible data race writing variable at 0x1D80AD28
==25143==    at 0x1D5D45EC: std::__default_alloc_template<true, 0>::deallocate(void*, unsigned) (in /usr/lib/libstdc++.so.5.0.5)
==25143==    by 0x804E1DB: std::__simple_alloc<CatObject*, std::__default_alloc_template<true, 0> >::deallocate(CatObject**, unsigned) (stl_alloc.h:242)
==25143==    by 0x804DBDB: std::_Vector_alloc_base<CatObject*, std::allocator<CatObject*>, true>::_M_deallocate(CatObject**, unsigned) (stl_vector.h:130)
==25143==    by 0x804DBC2: std::_Vector_base<CatObject*, std::allocator<CatObject*> >::~_Vector_base() (stl_vector.h:162)
==25143==    by 0x804D28F: std::vector<CatObject*, std::allocator<CatObject*> >::~vector() (stl_vector.h:297)
==25143==    by 0x806A692: sieve(void*) (sieve.cc:219)
==25143==    by 0x8054302: thread_pool_loop(void*) (pool.cc:27)
==25143==    by 0x1D4B1A90: thread_wrapper (vg_libpthread.c:867)
==25143==    by 0xB000EFED: do__quit (vg_scheduler.c:1872)
==25143== Address 0x1D80AD28 is 744 bytes inside a block of size 1304 alloc'd by thread 2
==25143==    at 0x1D4A93BB: operator new(unsigned) (vg_replace_malloc.c:133)
==25143==    by 0x1D5D49E8: std::__default_alloc_template<true, 0>::_S_chunk_alloc(unsigned, int&) (in /usr/lib/libstdc++.so.5.0.5)
==25143==    by 0x1D5D48DC: std::__default_alloc_template<true, 0>::_S_refill(unsigned) (in /usr/lib/libstdc++.so.5.0.5)
==25143==    by 0x1D5D44D1: std::__default_alloc_template<true, 0>::allocate(unsigned) (in /usr/lib/libstdc++.so.5.0.5)
==25143==    by 0x8052209: std::__simple_alloc<CatObject*, std::__default_alloc_template<true, 0> >::allocate(unsigned) (stl_alloc.h:232)
==25143==    by 0x80521E2: std::_Vector_alloc_base<CatObject*, std::allocator<CatObject*>, true>::_M_allocate(unsigned) (stl_vector.h:127)
==25143==    by 0x806E693: void std::vector<CatObject*, std::allocator<CatObject*> >::_M_range_insert<__gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > > >(__gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, __gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, __gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, std::forward_iterator_tag) (vect
==25143==    by 0x806E3D2: void std::vector<CatObject*, std::allocator<CatObject*> >::_M_insert_dispatch<__gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > > >(__gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, __gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, __gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, __false_type) (stl_vector.h:8
==25143==    by 0x806E3AA: void std::vector<CatObject*, std::allocator<CatObject*> >::insert<__gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > > >(__gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, __gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >, __gnu_cxx::__normal_iterator<CatObject**, std::vector<CatObject*, std::allocator<CatObject*> > >) (stl_vector.h:693)
==25143==    by 0x806A394: sieve(void*) (sieve.cc:189)
==25143==    by 0x8054302: thread_pool_loop(void*) (pool.cc:27)
==25143==    by 0x1D4B1A90: thread_wrapper (vg_libpthread.c:867)
==25143==    by 0xB000EFED: do__quit (vg_scheduler.c:1872)
==25143== Previous state: shared RW, locked by:0x80C3A70(sortmap_mutex)
==25143==

So my interpretation is that the problem is at sieve.cc:219. Here it is:

    delete [] local_sortmap;

Note the term *local*! This variable is declared locally in this function - no way it can be shared between threads!
Other threads are running this same function simultaneously, but with different input data - and as I said, this variable is declared local to the function, so every thread should have its own independent copy, right? Here is what the declaration looks like:

    vector<CatObject*> *local_sortmap;
    local_sortmap = new vector<CatObject*>[olist.size()];

It says that the memory in question was allocated at sieve.cc:189. Here it is:

    local_sortmap[(*itr1)->index()].insert(local_sortmap[(*itr1)->index()].end(), itr1 + 1, itr2);

itr1 & itr2 are iterators into another vector<CatObject*>, which is global, but I have verified that each thread is working off its own version of it. Any ideas??? Is helgrind finding a data race in the STL?

Thanks
Eric

FWIW, this is a SuSE box, dual HT Xeons:

    uname -ra
    Linux neuromancer 2.4.21-273-smp4G #1 SMP Mon Jan 17 13:19:07 UTC 2005 i686 i686 i386 GNU/Linux

    gcc-c++-3.3.1-24
    libstdc++-3.3.1-24

    valgrind --version
    valgrind-2.4.0

Another weird bit - I was previously getting a data race error a few lines up, where I copy local_sortmap to a global vector<CatObject*> (which, again, is only accessed by a single thread). I wrapped a mutex around it anyway, just to see what happened, and now I get this.
|
|
From: Arndt M. <amu...@is...> - 2005-03-31 07:41:18
Attachments:
amuehlen.vcf
|
erg...@pc... wrote:
>Ok, I'm baffled.
>
>
>
[snip]
So was I, when I first noticed the overwhelming number of error messages
reported by helgrind from inside the standard C++ library.
Helgrind's confusion is due to the memory allocation strategy of the STL
containers, which reuses freed memory chunks for other containers and
therefore makes the Eraser algorithm detect a possible data race.
Fortunately you can disable this feature by simply setting an
environment variable before running your program:
export GLIBCPP_FORCE_NEW=1
valgrind --tool=helgrind ...
Hth,
Arndt
|
|
From: Jeremy F. <je...@go...> - 2005-03-31 21:25:45
|
Arndt Muehlenfeld wrote:
> So was I, when I first noticed the overwhelming amount of error
> messages reported by helgrind inside the Standard C++ lib.
> The confusion of helgrind is due to the memory allocation strategy of
> STL containers that reuses freed memory chunks for other containers
> and therefore makes the eraser algorithm detect a possible data race.
Yep, that's what I was going to say. helgrind is confused because it
doesn't know the memory has been recycled, so it assumes the new use is
a continuation of the old use.
> Fortunately you can disable this feature by simply setting an
> environment variable before running your program:
>
> export GLIBCPP_FORCE_NEW=1
> valgrind --tool=helgrind ...
That should be in the FAQ when we resurrect helgrind.
J
|
|
From: Eric G. <erg...@pc...> - 2005-04-01 04:16:44
|
Ok, interesting - I'll give that environment variable a try. Did I mention that when it crashes under gdb, this is where it ends up pointing to as well? And it doesn't crash if I run it serially?

Thanks
Eric

Jeremy Fitzhardinge wrote:
>Arndt Muehlenfeld wrote:
>
>>So was I, when I first noticed the overwhelming amount of error
>>messages reported by helgrind inside the Standard C++ lib.
>>The confusion of helgrind is due to the memory allocation strategy of
>>STL containers that reuses freed memory chunks for other containers
>>and therefore makes the eraser algorithm detect a possible data race.
>
>Yep, that's what I was going to say. helgrind is confused because it
>doesn't know the memory has been recycled, so it assumes the new use is
>a continuation of the old use.
>
>>Fortunately you can disable this feature by simply setting an
>>environment variable before running your program:
>>
>> export GLIBCPP_FORCE_NEW=1
>> valgrind --tool=helgrind ...
>
>That should be in the FAQ when we resurrect helgrind.
>
> J
|