Thread: [Libsigcx-main] Re: [sigc] Re: [gtkmm] libsigcx and gtkmm 2.4
From: Martin S. <mar...@hi...> - 2004-07-16 09:23:12
Here is an interesting article related to this thread:

http://en.wikipedia.org/wiki/Lock-free_and_wait-free_algorithms

-- Martin

On 15.06.2004 09:58:41, Martin Schulze wrote:
> On 14.06.2004 19:34:41, Christer Palm wrote:
>> Daniel Elstner wrote:
>>>
>>> Okay, you're (partly) right. ("Partly" because it's not "locking or
>>> unlocking": what's needed is unlock in thread A and lock in thread B.)
>>> I found this in Butenhof:
>>>
>>>     Whatever memory values a thread can see when it unlocks a mutex,
>>>     either directly or by waiting on a condition variable, can also
>>>     be seen by any thread that later locks the same mutex. Again,
>>>     data written after the mutex is unlocked may not necessarily be
>>>     seen by the thread that locks the mutex, even if the write
>>>     occurs before the lock.
>>>
>>> In other words, the sequence
>>>
>>>     pthread_mutex_lock(mutex);
>>>     pthread_mutex_unlock(mutex);
>>>
>>> issues a memory barrier instruction on the unlock. The other thread
>>> that wants to read the data still has to lock the same mutex though.
>>>
>> A memory barrier, or synchronize, instruction is issued both on lock
>> and unlock, and also in a bunch of other thread-related functions. Of
>> course all threads need to agree on which mutex protects memory
>> location X; that's how they make sure they don't execute a region of
>> code that accesses memory location X simultaneously. It is not the
>> case that only certain memory locations are synchronized when the
>> mutex is locked/unlocked.
>>
>> Having said that, is there any place in mine or Martin's code where
>> you believe that this rule isn't followed, except as a side effect of
>> passing objects that contain internal references?
>>
>> This is what IEEE Std 1003.1-2004 has to say about memory
>> synchronization requirements:
>>
>>   4.10 Memory Synchronization
>>
>>   Applications shall ensure that access to any memory location by more
>>   than one thread of control (threads or processes) is restricted such
>>   that no thread of control can read or modify a memory location while
>>   another thread of control may be modifying it. Such access is
>>   restricted using functions that synchronize thread execution and
>>   also synchronize memory with respect to other threads. The following
>>   functions synchronize memory with respect to other threads:
>>
>>   ...
>>   pthread_mutex_lock()
>>   ...
>>   pthread_mutex_unlock()
>>   ...
>
> This gives rise to an interesting question: If no locking is required
> (e.g. because atomic operations are used), which is the most efficient
> call to establish a memory barrier (e.g. before doing the atomic
> operation)? In a Linux driver I would call wmb(), but what can I do on
> the application side? Signal a dummy condition?
>
> Regards,
>
> Martin
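The two options touched on above, as a minimal sketch. The mutex variant follows directly from the Butenhof quote; the __sync_synchronize() builtin is an assumption here (it is a GCC extension providing a full barrier, not something established in this thread):

    #include <pthread.h>

    static pthread_mutex_t barrier_mutex = PTHREAD_MUTEX_INITIALIZER;

    // Mutex variant: whatever this thread wrote before the unlock is
    // guaranteed visible to any thread that subsequently locks the same
    // mutex - so the reading side must lock barrier_mutex as well.
    void flush_writes()
    {
        pthread_mutex_lock(&barrier_mutex);
        pthread_mutex_unlock(&barrier_mutex);
    }

    // Builtin variant (assumption: a GCC that provides __sync builtins):
    // a full memory barrier without any mutex, roughly the
    // application-side counterpart of the kernel's wmb().
    void full_barrier()
    {
        __sync_synchronize();
    }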
From: Martin S. <mar...@hi...> - 2004-06-14 17:48:54
On 14.06.2004 01:18:56, Daniel Elstner wrote:
> On Sun, 13.06.2004 at 23:47 +0200, Christer Palm wrote:
> >
> > 1. A string object is created.
> > 2. The shared mutex is locked.
> > 3. A shared copy of the string object is made.
> > 4. The mutex is unlocked.
> > 5. The original string object is destroyed.
> >
> > Now, the problem I see here is that the original string is destroyed
> > after the mutex is unlocked. So if string isn't thread-safe this is a
> > problem.
>
> That's indeed another problem.

Hm... after reading:

http://gcc.gnu.org/onlinedocs/libstdc++/faq/#5_6

I still cannot answer the question whether string (in the example from
above) _is_ thread-safe?!

Regards,
Martin

> > But I fail to see how locking the mutex before creating the original
> > string would make any difference. Successfully locking or unlocking a
> > mutex is guaranteed to synchronize memory with respect to other
> > threads regardless of whether that memory was touched before or after
> > the mutex was locked.
>
> That's new to me. Do you have any reference on that?
>
> --Daniel
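To make the ordering from the numbered list above concrete, a small sketch (assuming a copy-on-write std::string as in the libstdc++ of that era; the function names and the shared variable are invented for illustration):

    #include <string>
    #include <pthread.h>

    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static std::string shared;  // read by the other thread under the same mutex

    // Ordering from the numbered list: the copy made under the lock may
    // share its character buffer with 'original', so destroying
    // 'original' after the unlock touches that shared representation
    // without the lock held.
    void publish_questionable(const char* text)
    {
        std::string original(text);      // 1. string object created
        pthread_mutex_lock(&mutex);      // 2. shared mutex locked
        shared = original;               // 3. shared (shallow) copy made
        pthread_mutex_unlock(&mutex);    // 4. mutex unlocked
    }                                    // 5. original destroyed here

    // Variant where the characters are copied under the lock:
    // assign() from a const char* makes a deep copy, so nothing outside
    // the critical section shares a buffer with 'shared'.
    void publish_safer(const char* text)
    {
        pthread_mutex_lock(&mutex);
        shared.assign(text);
        pthread_mutex_unlock(&mutex);
    }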
From: Daniel E. <dan...@gm...> - 2004-06-14 15:17:59
On Mon, 14.06.2004 at 10:07 +0200, Martin Schulze wrote:
> Hm... after reading:
>
> http://gcc.gnu.org/onlinedocs/libstdc++/faq/#5_6
>
> I still cannot answer the question whether string (in the example from
> above) _is_ thread-safe?!

Well, std::string in libstdc++ uses atomic increment and decrement for
the reference count. However, the problem is that you cannot rely on
this since the standard doesn't say anything about it AFAIK.

Also, I'm not sure if an atomic reference count is actually sufficient
to always guarantee thread-safety... suppose you have:

    std::string a ("foo");   // in thread 1
    std::string b (a);       // in thread 2
    b[2] = 'b';              // mutate to force copy (in thread 2)

Now, atomic reference count or not, you have the situation that thread 2
reads memory written by thread 1 without any mutex locking. Well, unless
atomic inc/dec also have the effect of a memory barrier, which would
make them rather heavy...

--Daniel
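If one does not want to rely on those copy-on-write internals, one way out is sketched below (the helper name is invented): force a deep copy in the sending thread, before the string crosses the thread boundary under a mutex, so the receiving thread never shares a buffer with the original.

    #include <string>

    // Build the new string from the raw characters rather than from the
    // other std::string object, bypassing any reference-counted sharing.
    // Meant to be called in the sending thread, before the result is
    // handed to the other thread under a mutex.
    std::string deep_copy(const std::string& s)
    {
        return std::string(s.data(), s.size());
    }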
From: Martin S. <mar...@hi...> - 2004-06-14 17:48:56
On 13.06.2004 19:45:57, Daniel Elstner wrote:
> On Sun, 13.06.2004 at 18:30 +0200, Martin Schulze wrote:
> > > > knows of it. Note that sigc::bind does not take arguments as
> > > > references by default if this is where you are heading.
> > >
> > > std::string can be implemented with reference counting, and the
> > > libstdc++ shipped with GCC does exactly that.
> >
> > Meaning that no deep copy of the string is made although it is passed
> > "by value"?! Then I understand the problem here.
>
> Exactly.
>
> > (However, if you pass a "const char*" into the std::string ctor as in
> > my example the copy is being created at once, isn't it?)
>
> Right. But you have to lock *before* creating the std::string object!
>
> > I still don't see the problem in the case where no references/pointers
> > are being passed around: The list of slots the dispatcher operates on
> > _is_ protected by memory barriers (there might be bugs in my code but
> > it is perfectly possible to simply use a mutex around
> > 'std::list::push_back()' / 'std::list::pop_front()' as I pointed out
> > in a comment and as Christer does).
>
> Sure, but the caller passes in an already constructed std::string. As I
> said above, you need to lock before constructing the object.

Sorry, I'm a bit lame at the moment. I'm trying to express the problem
in a simple flow table with a *bang* at time=t+4:

    time   thread A should do (*does)      thread B [*does]
    -------------------------------------------------------------------
    t      construct slot
           (*slot is partly constructed)
    t+2    acquire lock
           (*lock is acquired)
    t+3    add slot to list
    t+3    release lock
           (*slot is added to list)
           (*lock is released)
    t+4    idle                            [*accesses slot]
    t+5    (*construction of slot is finished)

Is this a possible scenario? I can't think properly about it at the
moment - too tired. Note that slot is copied again while adding to the
list during std::list::push_back(). I would assume that this copy is
fully initialized before the lock is actually released. Of course in
the case of our std::string this is still a problem because the newly
constructed string is a shallow copy of the old one. But for static
data types everything should be all right, shouldn't it?

Regards,
Martin
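A stripped-down sketch of the dispatcher pattern described above, with a mutex around std::list::push_back() / pop_front(); the class and its member names are illustrative, not the actual libsigcx code:

    #include <list>
    #include <pthread.h>
    #include <sigc++/sigc++.h>

    // Queue of slots waiting to be dispatched in the receiving thread.
    class SlotQueue
    {
    public:
        SlotQueue()  { pthread_mutex_init(&mutex_, 0); }
        ~SlotQueue() { pthread_mutex_destroy(&mutex_); }

        // Called from the sending thread. push_back() copies the slot
        // while the lock is held, so the copy is complete before the
        // unlock makes it visible.
        void push(const sigc::slot<void>& slot)
        {
            pthread_mutex_lock(&mutex_);
            queue_.push_back(slot);
            pthread_mutex_unlock(&mutex_);
        }

        // Called from the dispatching thread: take one slot under the
        // lock, invoke it outside the lock.
        bool dispatch_one()
        {
            pthread_mutex_lock(&mutex_);
            if (queue_.empty()) {
                pthread_mutex_unlock(&mutex_);
                return false;
            }
            sigc::slot<void> slot = queue_.front();
            queue_.pop_front();
            pthread_mutex_unlock(&mutex_);

            slot();  // run outside the lock
            return true;
        }

    private:
        pthread_mutex_t mutex_;
        std::list<sigc::slot<void> > queue_;
    };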
From: Daniel E. <dan...@gm...> - 2004-06-14 15:21:37
On Sun, 13.06.2004 at 22:18 +0200, Martin Schulze wrote:
>     time   thread A should do (*does)      thread B [*does]
>     -------------------------------------------------------------------
>     t      construct slot
>            (*slot is partly constructed)
>     t+2    acquire lock
>            (*lock is acquired)
>     t+3    add slot to list
>     t+3    release lock
>            (*slot is added to list)
>            (*lock is released)
>     t+4    idle                            [*accesses slot]
>     t+5    (*construction of slot is finished)
>
> Is this a possible scenario? I can't think properly about it at the
> moment - too tired. Note that slot is copied again while adding to the
> list during std::list::push_back(). I would assume that this copy is
> fully initialized before the lock is actually released. Of course in
> the case of our std::string this is still a problem because the newly
> constructed string is a shallow copy of the old one. But for static
> data types everything should be all right, shouldn't it?

Yes. But at least in libsigc++ 1.2, slots were reference-counted. Has
this changed in 2.0?

--Daniel
From: Martin S. <mar...@hi...> - 2004-06-14 17:48:52
On 14.06.2004 01:20:26, Daniel Elstner wrote:
> On Sun, 13.06.2004 at 22:18 +0200, Martin Schulze wrote:
> >     time   thread A should do (*does)      thread B [*does]
> >     -------------------------------------------------------------------
> >     t      construct slot
> >            (*slot is partly constructed)
> >     t+2    acquire lock
> >            (*lock is acquired)
> >     t+3    add slot to list
> >     t+3    release lock
> >            (*slot is added to list)
> >            (*lock is released)
> >     t+4    idle                            [*accesses slot]
> >     t+5    (*construction of slot is finished)
> >
> > Is this a possible scenario? I can't think properly about it at the
> > moment - too tired. Note that slot is copied again while adding to the
> > list during std::list::push_back(). I would assume that this copy is
> > fully initialized before the lock is actually released. Of course in
> > the case of our std::string this is still a problem because the newly
> > constructed string is a shallow copy of the old one. But for static
> > data types everything should be all right, shouldn't it?
>
> Yes. But at least in libsigc++ 1.2, slots were reference-counted. Has
> this changed in 2.0?

Yes, this has changed. Reference-counting of slots made signal emission
very complex and had very little benefit. Apart from the template
clutter, which makes it hard to read, the internals of libsigc++ 2.0 are
much simpler than in libsigc++ 1.2!

Cheers,
Martin
From: Timothy M. S. <ts...@k-...> - 2004-06-13 19:47:52
Christer Palm wrote:
> Not only do you need to know how to serialize the object, but you also
> need the code to do it. And if you don't have it, apart from actually
> writing it - where would it go? It should, and may have to, go into the
> classes themselves, because of OO principles and the potential need to
> access private members.
>
> As much as I would like to have that in C++, it just isn't there. It
> seems to me that attempting to fix that in Glib just so that you could
> do cross-thread signalling is way over the top.

Can't comment on the threading-specific issues raised in this argument,
but I do have to correct the common misperception that C++ doesn't have
serialization - it most certainly does, using the OO, type-safe
iostreams interface. See boost::lexical_cast for a trivial-but-effective
tool for putting those serialization capabilities to work.

Tim Shead
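A minimal sketch of the boost::lexical_cast approach mentioned above: any type with iostream operator<< and operator>> can be turned into a std::string and back.

    #include <string>
    #include <boost/lexical_cast.hpp>

    int main()
    {
        // Serialize: anything with an operator<< goes to a string.
        std::string text = boost::lexical_cast<std::string>(3.14159);

        // Deserialize: anything with an operator>> comes back out.
        double value = boost::lexical_cast<double>(text);

        return value > 0.0 ? 0 : 1;
    }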
From: Christer P. <pa...@no...> - 2004-06-14 17:52:13
Timothy M. Shead wrote:
>
> Can't comment on the threading-specific issues raised in this argument,
> but I do have to correct the common misperception that C++ doesn't have
> serialization - it most certainly does, using the OO, type-safe
> iostreams interface. See boost::lexical_cast for a trivial-but-effective
> tool for putting those serialization capabilities to work.
>

Sorry, but I meant a useful form of serialization... Something that can
serialize pretty much any object, or at least works with the classes
provided by the standard library. Being able to deserialize the result
would also be nice.

--
Christer Palm
From: Daniel E. <dan...@gm...> - 2004-06-14 17:59:05
On Sun, 13.06.2004 at 12:47 -0700, Timothy M. Shead wrote:
> Can't comment on the threading-specific issues raised in this argument,
> but I do have to correct the common misperception that C++ doesn't have
> serialization - it most certainly does, using the OO, type-safe
> iostreams interface. See boost::lexical_cast for a trivial-but-effective
> tool for putting those serialization capabilities to work.

Cool, didn't think about that. Although this would still require
operator<< and operator>> to be implemented, it's a much nicer
interface. The only problem would be that it's unnecessarily inefficient
for data types that can simply be copied "as is" without converting to a
text representation and back again. But perhaps that's acceptable.

--Daniel
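For illustration, here is roughly what that operator<< / operator>> requirement looks like for a user-defined type; the Point struct is made up for this example.

    #include <iostream>
    #include <string>
    #include <boost/lexical_cast.hpp>

    struct Point { int x, y; };

    // Text representation used by lexical_cast on the way out...
    std::ostream& operator<<(std::ostream& os, const Point& p)
    {
        return os << p.x << ',' << p.y;
    }

    // ...and on the way back in.
    std::istream& operator>>(std::istream& is, Point& p)
    {
        char sep = 0;
        is >> p.x >> sep >> p.y;
        if (sep != ',')
            is.setstate(std::ios::failbit);
        return is;
    }

    int main()
    {
        Point p = { 3, 4 };
        std::string text = boost::lexical_cast<std::string>(p);
        Point q = boost::lexical_cast<Point>(text);
        return (q.x == p.x && q.y == p.y) ? 0 : 1;
    }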
From: Christer P. <pa...@no...> - 2004-06-14 12:29:05
Daniel Elstner wrote:
>
> The problem is you need to lock before the data is being written. And
> mutexes don't ensure that event A happens after event B. Mutexes ensure
> that read/write A and read/write B don't happen at the same time, _and_
> they issue memory barrier instructions to ensure memory visibility.
>

Hmm. Perhaps it would be better if you took a look at the code and told
me where the problem is?

>
> Remember, we're talking about thread synchronization. This is not
> something to be taken lightly. Also note that nobody said that the
> locked data has to be global; you can easily store it in an object
> somewhere appropriate, or even put it into a queue.
>

I'm not taking it lightly. My point was that although you could
technically accomplish the same thing without arguments, arguments are a
pretty useful feature.

If you have similar code in perhaps hundreds of places doing similar
things, then it would make sense to attempt to make a generic
implementation of that pattern. Which is exactly what this is about.

--
Christer Palm
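As a sketch of why argument passing matters for a generic implementation, here is the kind of slot a caller would hand to such a dispatcher (libsigc++ 2.0 syntax; the status-message function is just an example):

    #include <iostream>
    #include <string>
    #include <sigc++/sigc++.h>

    // Would run in the GUI thread when the dispatcher invokes the slot.
    void show_status(std::string message)
    {
        std::cout << message << std::endl;
    }

    int main()
    {
        std::string status("worker finished");

        // sigc::bind copies 'status' by value into the slot, so the slot
        // carries its own argument across the thread boundary - this copy
        // is exactly where the std::string copy-on-write question above
        // comes from.
        sigc::slot<void> slot =
            sigc::bind(sigc::ptr_fun(&show_status), status);

        slot();  // in the real pattern this call happens in the GUI thread
        return 0;
    }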
From: Daniel E. <dan...@gm...> - 2004-06-14 15:35:24
On Sun, 13.06.2004 at 21:34 +0200, Christer Palm wrote:
> Hmm. Perhaps it would be better if you took a look at the code and told
> me where the problem is?

Sorry, just didn't have the time to do that. I consider this to be a
conceptual problem though, not specific to your code. Do tell me if I'm
missing something.

> I'm not taking it lightly. My point was that although you could
> technically accomplish the same thing without arguments, arguments are
> a pretty useful feature.

Agreed.

> If you have similar code in perhaps hundreds of places doing similar
> things, then it would make sense to attempt to make a generic
> implementation of that pattern. Which is exactly what this is about.

Indeed. The question is whether we can write a generic implementation
that actually works.

--Daniel