Thread: [Libsigcx-main] libsigcx and gtkmm 2.4
From: Christer P. <pa...@no...> - 2004-05-30 12:21:16
Hi!

I'm (finally) moving over to gtkmm 2.4, but I can't find a libSigCX
package to match the sigc++-2.0 used in 2.4. It appears that it only
supports sigc++-1.2. I tried to simply tweak configure.ac to make it
build against the sigc++-2.0 package, but it doesn't compile,
unfortunately.

Does anyone have any updates on the status of libSigCX, or otherwise
have any experience that could help me out?

--
Christer Palm
From: Martin S. <mar...@hi...> - 2004-05-30 13:05:30
On 2004-05-30 14:21, Christer Palm wrote:
> Hi!
>
> I'm (finally) moving over to gtkmm 2.4, but I can't find a libSigCX
> package to match sigc++-2.0 used in 2.4. It appears that it only
> supports sigc++-1.2.

That's right. I think libSigCX is being maintained by Andreas Rottmann.
I haven't heard of any plans to port it to sigc++-2.0.

> I tried to simply tweak configure.ac to make it build against the
> sigc++-2.0 package, but it doesn't compile, unfortunately.

libSigCX depends on internal structures of sigc++-1.2. Since sigc++-2.0
has been written from scratch, the internal structure is completely
different.

> Does anyone have any updates on the status of libSigCX or otherwise
> have any experience that could help me out?

Which classes from libSigCX do you need? At some point I had some random
thoughts about libSigCX & sigc++-2.0 - some things should be much easier
to implement with the new API.

Regards,
Martin
From: Andreas R. <a.r...@gm...> - 2004-05-30 16:29:59
Martin Schulze <mar...@hi...> writes:

> On 2004-05-30 14:21, Christer Palm wrote:
>> Hi!
>>
>> I'm (finally) moving over to gtkmm 2.4, but I can't find a libSigCX
>> package to match sigc++-2.0 used in 2.4. It appears that it only
>> supports sigc++-1.2.
>
> That's right. I think libSigCX is being maintained by Andreas Rottmann.
> I didn't hear of any plans to port it to sigc++-2.0.

Right, I've mostly stalled development of libsigcx, but with a helping
hand, I might invest a bit of effort to make it work on sigc++ 2.0. (I
mainly waited to see if there would be any demand at all :).

>> I tried to simply tweak configure.ac to make it build against the
>> sigc++-2.0 package, but it doesn't compile, unfortunately.
>
> libSigCX depends on internal structures of sigc++-1.2. Since sigc++-2.0
> has been written from scratch the internal structure is completely
> different.
>
>> Does anyone have any updates on the status of libSigCX or otherwise
>> have any experience that could help me out?
>
> Which classes from libSigCX do you need? At some time I had some
> random thoughts about libSigCX & sigc++-2.0 - some things should be
> much easier to implement with the new API.

I've thought about moving the sigcx functionality into Yehia[0] when
switching to sigc++ 2.0.

[0] http://ucxx.sf.net

Andy
--
Andreas Rottmann | Rotty@ICQ | 118634484@ICQ | a.r...@gm...
http://yi.org/rotty | GnuPG Key: http://yi.org/rotty/gpg.asc
Fingerprint | DFB4 4EB4 78A4 5EEE 6219 F228 F92F CFC5 01FD 5B62

To iterate is human; to recurse, divine.
From: Christer P. <pa...@no...> - 2004-05-30 17:42:19
Hi Martin!

Martin Schulze wrote:
> Which classes from libSigCX do you need? At some time I had some
> random thoughts about libSigCX & sigc++-2.0 - some things should be
> much easier to implement with the new API.

My app depends rather heavily on SigCX::GtkDispatcher and
SigCX::ThreadTunnel to do cross-thread signalling. I also pass around a
lot of signal arguments by reference, so I also depend on being able to
have fully synchronous signals.

Perhaps it would be a good idea to extend the existing Glib::Dispatcher
mechanism to provide the functionality of SigCX? IMO, Glib::Dispatcher
is rather useless in its current form, and yet cross-thread signalling
is definitely one of the, if not _the_, most convenient and elegant
ways of doing multithreaded gtkmm programming...

--
Christer Palm
From: Martin S. <mar...@hi...> - 2004-05-31 14:44:23
Attachments:
gtkmm_dispatcher.tar.gz
On 2004-05-30 19:42, Christer Palm wrote:
> Hi Martin!
>
> Martin Schulze wrote:
>> Which classes from libSigCX do you need? At some time I had some
>> random thoughts about libSigCX & sigc++-2.0 - some things should be
>> much easier to implement with the new API.
>
> My app depends rather heavily on SigCX::GtkDispatcher and
> SigCX::ThreadTunnel to do cross-thread signalling. I also pass around
> a lot of signal arguments by reference, so I also depend on being
> able to have fully synchronous signals.

I've just updated a dispatcher class I've been using for libsigc++ 2.0.
It's based on sigcx but only implements a subset of the functionality.
Maybe it can serve as a starting point for the port, or you can just use
it. I can't test it myself because I still haven't updated to gtkmm-2.4
(low bandwidth connection :( ). Attached is a tarball that also includes
a simple (single-threaded) test program to illustrate the usage:

void foo(std::string text)
{
  std::cout << text.c_str() << std::endl;
}

int main(int argc, char *argv[])
{
  Glib::thread_init();
  Gtk::Main myApp(argc, argv);

  sigc::GtkmmDispatcher disp;
  disp.tunnel(sigc::bind(&foo, std::string("hello world")));
  /* - call this function from any thread;
     - bind up to 7 arguments at once;
     - pass 'true' as a second argument if the slot should
       be executed synchronously;
     - make use of signal::make_slot() if the dispatcher
       should emit a signal; */
}

The dispatcher tunnels slots across threads. It uses a Glib::Dispatcher
internally to trigger their execution. For performance reasons an extra
list class is included to store the slots: 'ringbuffer' - a fixed-size
list that supports multithreaded addition/removal of objects without any
locks. (It can easily be replaced by a std::list and a mutex.) To pass
return values across threads, an additional adaptor class would be
needed.
Regards,
Martin

> Perhaps it would be a good idea to extend the existing
> Glib::Dispatcher mechanism to provide the functionality of SigCX?
> IMO, Glib::Dispatcher is rather useless in its current form, and yet,
> cross-thread signalling is definitely one of the, if not _the_, most
> convenient and elegant way of doing multithreaded gtkmm programming...
>
> --
> Christer Palm
From: Christer P. <pa...@no...> - 2004-06-14 13:02:26
Attachments:
dispatcher.tar.gz
Martin Schulze wrote:
> I've just updated a dispatcher class I've been using for libsigc++ 2.0.
> It's based on sigcx but only implements a subset of the functionality.
> Maybe it can serve as a starting point for the port or you can just use
> it. I can't test it myself because I still haven't updated to gtkmm-2.4
> (low bandwidth connection :( ). Attached is a tarball that also
> includes a simple (single-threaded) test program to illustrate the
> usage:

Thanks Martin!

I "stole" the basic principles from your class to create my own variant
of a multi-argument Gtk::Dispatcher. As I think this is some pretty
useful stuff, I'm posting the result here.

Perhaps this could be adapted to go into the Glibmm distribution?
Let me know what you all think.

--
Christer Palm
From: Daniel E. <dan...@gm...> - 2004-06-14 14:10:27
On Sat, 12 Jun 2004 at 15:19 +0200, Christer Palm wrote:
> Thanks Martin!
>
> I "stole" the basic principles from your class to create my own variant
> of a multi-argument Gtk::Dispatcher. As I think this is some pretty
> useful stuff, I'm posting the result here.
>
> Perhaps this could be adapted to go into the Glibmm distribution?
> Let me all know what you think.

The problem is that I'm still not convinced that cross-thread signalling
with arguments can be implemented correctly as a generic template. From
Martin's description of the SigCX solution, it appears that a so-called
synchronous mode is used to get around that:

    disp.tunnel(sigc::bind(&foo, std::string("hello world")));
    /* - call this function from any thread;
       - bind up to 7 arguments at once;
       - pass 'true' as a second argument if the slot should
         be executed synchronously;
       - make use of signal::make_slot() if the dispatcher
         should emit a signal; */

Hmm, std::string is a perfect example of an argument type that requires
special handling. So assuming "synchronously" does what I think it does,
why doesn't the example use this mode?

How does this synchronous thing work? Does it really wait for the main
loop to process the signal, and wouldn't that defeat the purpose of
threads? Even if it does this, you still need mutex locking to protect
the memory being shared (ensuring that event A happens after event B is
not enough due to the way modern hardware works; you definitely need
memory barriers too).

Also, you can always use plain Glib::Dispatcher in conjunction with a
Glib::Mutex to pass data around. This way you're forced to think about
the locking, which is IMHO a good thing.

When I thought about adding signal parameter support to Glib::Dispatcher
a while ago, I played with the idea of making Glib::Dispatcher provide a
low-level interface for sending raw blocks of memory through the pipe.
On top of that there could be some kind of plugin interface that
requires you to implement two functions that convert your data to raw
memory and back again. I don't think it should be entirely automagic
through templates, since that'd give the user a false sense of security.
The big advantage, of course, is avoidance of any locking whatsoever.

Comments, corrections, insights, etc. will be appreciated.

Cheers,
--Daniel
From: Christer P. <pa...@no...> - 2004-06-14 12:11:30
A few comments in addition to what Martin already wrote...

Daniel Elstner wrote:
> How does this synchronous thing work? Does it really wait for the main
> loop to process the signal, and wouldn't that defeat the purpose of
> threads?

It's just a convenient form of locking for those apps that want/need it.
Usually, it is perfectly acceptable to block until the signal handler
returns. It definitely doesn't defeat the purpose of threads.

> Even if it does this you still need mutex locking to protect
> the memory being shared (ensuring that event A happens after event B is
> not enough due to the way modern hardware works; you definitely need
> memory barriers too).

Synchronous signals _do_ use a mutex behind the scenes to implement the
locking. Not that, AFAIK, mutexes do anything more than ensuring that
event A happens after event B.

> Also, you can always use plain Glib::Dispatcher in conjunction with a
> Glib::Mutex to pass data around. This way you're forced to think about
> the locking which is IMHO a good thing.

IMHO, this is like saying that function arguments are unnecessary,
because you could always use global variables to pass data around.

> When I thought about adding signal parameter support to Glib::Dispatcher
> a while ago, I played with the idea of making Glib::Dispatcher provide a
> low-level interface for sending raw blocks of memory through the pipe.
> On top of that there could be some kind of plugin interface that
> requires you to implement two functions that convert your data to raw
> memory and back again.

Isn't this exactly what CORBA, for example, is all about?
While it

> I don't think it should be entirely automagic
> through templates since that'd give the user a false sense of security.
> The big advantage of course is avoidance of any locking whatsoever.
>
> Comments, corrections, insights, etc. will be appreciated.
From: Christer P. <pa...@no...> - 2004-06-14 12:06:11
Christer Palm wrote:
> A few comments in addition to what Martin already wrote...
>
> Daniel Elstner wrote:
>
>> When I thought about adding signal parameter support to Glib::Dispatcher
>> a while ago, I played with the idea of making Glib::Dispatcher provide a
>> low-level interface for sending raw blocks of memory through the pipe.
>> On top of that there could be some kind of plugin interface that
>> requires you to implement two functions that convert your data to raw
>> memory and back again.
>
> Isn't this exactly what CORBA, for example, is all about?
> While it

Whoops, it got chopped off... Here we go again.

While I think that this would be great for an inter-process or
inter-network communication mechanism, I don't think it's a very good
idea for inter-thread communication.

Serializing/deserializing is usually very inefficient and is also
extremely hard to do in a C++ environment. You'd need to know how to
marshal each and every class in the object's type as well as its
containment hierarchy. Leaving all this to the implementor of the
top-level class will definitely break basic OO principles, and is bound
to be very error-prone.

--
Christer Palm
From: Daniel E. <dan...@gm...> - 2004-06-14 15:43:56
On Sun, 13 Jun 2004 at 12:57 +0200, Christer Palm wrote:
>> Daniel Elstner wrote:
>>
>>> When I thought about adding signal parameter support to Glib::Dispatcher
>>> a while ago, I played with the idea of making Glib::Dispatcher provide a
>>> low-level interface for sending raw blocks of memory through the pipe.
>>> On top of that there could be some kind of plugin interface that
>>> requires you to implement two functions that convert your data to raw
>>> memory and back again.
>>
>> Isn't this exactly what CORBA, for example, is all about?
>> While it
>
> Whoops, it got chopped off... Here we go again.
>
> While I think that this would be great for a inter-process or
> inter-network communication mechanism, I don't think it's a very good
> idea for inter-thread communication.
>
> Serializing/deserializing is usually very inefficient and is also
> extremely hard to do in a C++ environment. You'd need to know how to
> marshal each and every class in the objects type as well as containment
> hiearchy. Leaving all this to the implementor of the top-level class
> will definitely break the basic OO principles, and will be bound to be
> very error-prone.

Wrong. Locking is inefficient. Serializing is much faster, unless you're
talking about serializing a whole file or something. And right, you do
need to know how to serialize each and every object you use. Just as
you'd need to know details about the implementation of a class, such as
"does it use reference counting internally?", before you can be sure
that simply locking a mutex actually works.

--Daniel
From: Christer P. <pa...@no...> - 2004-06-14 12:30:11
Daniel Elstner wrote:
> On Sun, 13 Jun 2004 at 12:57 +0200, Christer Palm wrote:
>> Serializing/deserializing is usually very inefficient and is also
>> extremely hard to do in a C++ environment. You'd need to know how to
>> marshal each and every class in the objects type as well as containment
>> hiearchy. Leaving all this to the implementor of the top-level class
>> will definitely break the basic OO principles, and will be bound to be
>> very error-prone.
>
> Wrong. Locking is inefficient. Serializing is much faster unless
> you're talking about serializing a whole file or something.

I would say that this is quite dependent on the locking scheme, the lock
contention potential, the lock wait time, the complexity of the object
being serialized, and whether you are talking about efficiency in terms
of lead time, consumption of CPU cycles, or code size. But I'll happily
stand corrected if you can back that claim up.

> And right, you do need to know how to serialize each and every object
> you use. Just as you'd need to know details about the implementation
> of a class, such as "does it use reference counting internally?"
> before you can be sure that simply locking a mutex actually works.

Not only do you need to know how to serialize the object, but you also
need the code to do it. And if you don't have it, apart from actually
writing it - where would it go? It should, and may have to, go into the
classes themselves, because of OO principles and the potential need to
access private members. As much as I would like to have that in C++, it
just isn't there. It seems to me that attempting to fix that in Glib
just so that you could do cross-thread signalling is way over the top.

I do agree that passing or sharing objects safely between threads, or
indeed just making a copy of an object, has similar limitations in C++.
However, we're not trying to solve that problem. The user _would_ still
have to take the necessary precautions.
If your stance is that you'd rather avoid doing something altogether if
there's no way of making it foolproof, then so be it.

--
Christer
From: Daniel E. <dan...@gm...> - 2004-06-14 15:29:06
On Sun, 13 Jun 2004 at 21:23 +0200, Christer Palm wrote:
> I would say that this is quite dependent on the locking scheme, lock
> contention potential, lock wait time, the complexity of the object
> beeing serialized and whether you are talking of efficiency in terms of
> lead time, consumption of CPU cycles or code size. But I happily stand
> corrected if you could back that claim up.

Granted, it depends on the situation. My opinion is mostly based on the
experience of others who work on realtime-critical applications, such as
Paul Davis' audio stuff.

> I do agree that passing or sharing objects safely between threads, or
> indeed, just making a copy of an object has similar limitations in C++.
> However, we're not trying to solve that problem. The user _would_ still
> have to take necessary precautions.
>
> If your stance is that you'd rather avoid doing something altogether if
> there's no way of making it foolproof, then so be it.

Nothing is foolproof. My point is that it's just too easy to use e.g.
std::string objects without thinking about the consequences.

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 15:39:57
On Sun, 13 Jun 2004 at 11:39 +0200, Christer Palm wrote:
>> Even if it does this you still need mutex locking to protect
>> the memory being shared (ensuring that event A happens after event B is
>> not enough due to the way modern hardware works; you definitely need
>> memory barriers too).
>
> Synchronous signals _does_ use a mutex behind the scenes to implement
> the locking. Not that, AFAIK, mutexes does anything more than ensuring
> that event A happens after event B.

The problem is that you need to lock before the data is being written.
And mutexes don't ensure that event A happens after event B. Mutexes
ensure that read/write A and read/write B don't happen at the same time,
_and_ they issue memory barrier instructions to ensure memory
visibility.

>> Also, you can always use plain Glib::Dispatcher in conjunction with a
>> Glib::Mutex to pass data around. This way you're forced to think about
>> the locking which is IMHO a good thing.
>
> IMHO, this is like saying that function arguments are unnecessary,
> because you could always use global variables to pass data around.

Remember, we're talking about thread synchronization. This is not
something to be taken lightly. Also note that nobody said that the
locked data has to be global; you can easily store it in an object
somewhere appropriate, or even put it into a queue.

--Daniel
From: Martin S. <mar...@hi...> - 2004-06-14 17:49:01
On 13.06.2004 00:03:36, Daniel Elstner wrote:
> On Sat, 12 Jun 2004 at 15:19 +0200, Christer Palm wrote:
>> Thanks Martin!
>>
>> I "stole" the basic principles from your class to create my own variant
>> of a multi-argument Gtk::Dispatcher. As I think this is some pretty
>> useful stuff, I'm posting the result here.
>>
>> Perhaps this could be adapted to go into the Glibmm distribution?
>> Let me all know what you think.
>
> The problem is that I'm still not convinced that cross-thread
> signalling with arguments can be implemented correctly as a generic
> template. From Martin's description of the SigCX solution it appears
> that a so-called synchronous mode is used to get around that:

The synchronous mode can optionally be used if the execution of the
calling thread should be suspended until the dispatcher has handled the
signal. It is not intended to be used as a work-around; asynchronous
mode should always work.

>     disp.tunnel(sigc::bind(&foo, std::string("hello world")));
>     /* - call this function from any thread;
>        - bind up to 7 arguments at once;
>        - pass 'true' as a second argument if the slot should
>          be executed synchronously;
>        - make use of signal::make_slot() if the dispatcher
>          should emit a signal; */
>
> Hmm std::string is a perfect example of an argument type that
> requires special handling.

Why? The slot object is completely initialized before the dispatcher
knows of it. Note that sigc::bind does not take arguments as references
by default, if this is where you are heading.

> So assuming "synchronously" does what I think it does, why doesn't
> the example use this mode?
>
> How does this synchronous thing work? Does it really wait for the
> main loop to process the signal, and wouldn't that defeat the purpose
> of threads?

Yes it does, and well, I have no use for the synchronous mode currently.
> Even if it does this you still need mutex locking to protect
> the memory being shared (ensuring that event A happens after event B
> is not enough due to the way modern hardware works; you definitely
> need memory barriers too).

Why would you need memory barriers? Thread A creates some objects,
thread B (the dispatcher) uses them and destroys them afterwards. Of
course, if you pass references around, you need to make sure yourself
that thread A doesn't manipulate the data while thread B is handling it.

Regards,
Martin

> Also, you can always use plain Glib::Dispatcher in conjunction with a
> Glib::Mutex to pass data around. This way you're forced to think
> about the locking which is IMHO a good thing.
>
> When I thought about adding signal parameter support to
> Glib::Dispatcher a while ago, I played with the idea of making
> Glib::Dispatcher provide a low-level interface for sending raw blocks
> of memory through the pipe. On top of that there could be some kind
> of plugin interface that requires you to implement two functions that
> convert your data to raw memory and back again. I don't think it
> should be entirely automagic through templates since that'd give the
> user a false sense of security. The big advantage of course is
> avoidance of any locking whatsoever.
>
> Comments, corrections, insights, etc. will be appreciated.
>
> Cheers,
> --Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 15:34:19
On Sun, 13 Jun 2004 at 2:10 +0200, Martin Schulze wrote:
>> Hmm std::string is a perfect example of an argument type that
>> requires special handling.
>
> Why? The slot object is completely initialized before the dispatcher
> knows of it. Note that sigc::bind does not take arguments as references
> by default if this is where you are heading.

std::string can be implemented with reference counting, and the
libstdc++ shipped with GCC does exactly that.

>> Even if it does this you still need mutex locking to protect
>> the memory being shared (ensuring that event A happens after event B
>> is not enough due to the way modern hardware works; you definitely
>> need memory barriers too).
>
> Why would you need memory barriers? Thread A creates some objects,
> thread B (the dispatcher) uses them and destroys them afterwards.
> Of course, if you pass references around, you need to make sure that
> thread A doesn't manipulate the data while thread B is handling it,
> yourself.

Wrong! It's not that simple. Whenever two threads access the same data,
both have to acquire the same mutex for any access to it whatsoever, be
it reading or writing. The only situation where this rule doesn't apply
is if thread A creates the data before launching thread B, and both
threads never write to it again, or only thread B does and thread A
never accesses it at all.

I highly recommend reading Butenhof's Programming with POSIX Threads. In
particular, memorize Chapter 3.4, Memory visibility between threads.
Here's a table from that chapter:

    Time  Thread 1                        Thread 2
    -----------------------------------------------------------------
    t     write "1" to address 1 (cache)
    t+1   write "2" to address 2 (cache)  read "0" from address 1
    t+2   cache system flushes address 2
    t+3                                   read "2" from address 2
    t+4   cache system flushes address 1

The point here is that there are no guarantees about memory ordering
whatsoever.
As it happens, reading address 2 works by chance, but the read from
address 1 returns the wrong value despite the fact that the read happens
after the write was completed.

Usage of special instructions is required to guarantee ordering, called
"memory barriers". Locking/unlocking a mutex issues these instructions.

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 16:18:36
On Sun, 13 Jun 2004 at 16:33 +0200, Daniel Elstner wrote:
> Wrong! It's not that simple. Whenever two threads access the same
> data, both have to acquire the same mutex for any access to it
> whatsoever, be it reading or writing. The only situation where this
> rule doesn't apply is if thread A creates the data before launching
> thread B, and both threads never write to it again, or only thread B
> does and thread A never accesses it at all.

There's another exception: joining a thread completely synchronizes the
memory of the thread being joined with the thread doing the joining.

--Daniel
From: Martin S. <mar...@hi...> - 2004-06-14 17:48:58
On 13.06.2004 16:33:45, Daniel Elstner wrote:
> On Sun, 13 Jun 2004 at 2:10 +0200, Martin Schulze wrote:
>>> Hmm std::string is a perfect example of an argument type that
>>> requires special handling.
>>
>> Why? The slot object is completely initialized before the dispatcher
>> knows of it. Note that sigc::bind does not take arguments as
>> references by default if this is where you are heading.
>
> std::string can be implemented with reference counting, and the
> libstdc++ shipped with GCC does exactly that.

Meaning that no deep copy of the string is made although it is passed
"by value"?! Then I understand the problem here. (However, if you pass a
"const char*" into the std::string ctor as in my example, the copy is
created at once, isn't it?)

>>> Even if it does this you still need mutex locking to protect
>>> the memory being shared (ensuring that event A happens after event B
>>> is not enough due to the way modern hardware works; you definitely
>>> need memory barriers too).
>>
>> Why would you need memory barries? Thread A creates some objects,
>> thread B (the dispatcher) uses them and destroys them afterwards.
>> Of course, if you pass references around, you need to make sure that
>> thread A doesn't manipulate the data while thread B is handling it,
>> yourself.
>
> Wrong! It's not that simple. Whenever two threads access the same
> data, both have to acquire the same mutex for any access to it
> whatsoever, be it reading or writing. The only situation where this
> rule doesn't apply is if thread A creates the data before launching
> thread B, and both threads never write to it again, or only thread B
> does and thread A never accesses it at all.
>
> I highly recommend reading Butenhof's Programming with POSIX Threads.
> In particular, memorize Chapter 3.4, Memory visibility between
> threads.
> Here's a table from that chapter:
>
>     Time  Thread 1                        Thread 2
>     -----------------------------------------------------------------
>     t     write "1" to address 1 (cache)
>     t+1   write "2" to address 2 (cache)  read "0" from address 1
>     t+2   cache system flushes address 2
>     t+3                                   read "2" from address 2
>     t+4   cache system flushes address 1
>
> The point here is that there are no guarantees about memory ordering
> whatsoever. As it happens reading address 2 works by chance, but the
> read from address 1 returns the wrong value despite the fact that the
> read happens after the write was completed.
>
> Usage of special instructions is required to guarantee ordering,
> called "memory barriers". Locking/unlocking a mutex issues these
> instructions.

I still don't see the problem in the case where no references/pointers
are being passed around: the list of slots the dispatcher operates on
_is_ protected by memory barriers (there might be bugs in my code, but
it is perfectly possible to simply use a mutex around
'std::list::push_back()' / 'std::list::pop_front()', as I pointed out in
a comment and as Christer does).

Regards,
Martin
From: <Ant...@is...> - 2004-06-13 19:41:12
IMHO, you should study the ACE message queue module:
http://www.dre.vanderbilt.edu/Doxygen/Current/html/ace/classACE__Message__Queue.html

It focuses on communication among threads.

Regards,
Antonio J. Saenz
Isotrol, S.A.
CTO
From: Daniel E. <dan...@gm...> - 2004-06-14 15:33:09
On Sun, 13 Jun 2004 at 18:30 +0200, Martin Schulze wrote:
>>> knows of it. Note that sigc::bind does not take arguments as
>>> references by default if this is where you are heading.
>>
>> std::string can be implemented with reference counting, and the
>> libstdc++ shipped with GCC does exactly that.
>
> Meaning that no deep copy of the string is made although it is passed
> "by value"?! Then I understand the problem here.

Exactly.

> (However, if you pass a "const char*" into the std::string ctor as in
> my example the copy is being created at once, isn't it?)

Right. But you have to lock *before* creating the std::string object!

> I still don't see the problem in the case where no references/pointers
> are being passed around: The list of slots the dispatcher operates on
> _is_ protected by memory barriers (there might be bugs in my code but
> it is perfectly possible to simply use a mutex around
> 'std::list::push_back()' / 'std::list::pop_front()' as I pointed out
> in a comment and as Christer does).

Sure, but the caller passes in an already constructed std::string. As I
said above, you need to lock before constructing the object. The only
alternative is a deep copy (that's what I'm proposing for the improved
Glib::Dispatcher implementation).

--Daniel
From: Christer P. <pa...@no...> - 2004-06-14 12:16:28
Daniel Elstner wrote:
> Sure, but the caller passes in an already constructed std::string. As I
> said above, you need to lock before constructing the object.

Hmmm. Let's see what's going on here...

1. A string object is created.
2. The shared mutex is locked.
3. A shared copy of the string object is made.
4. The mutex is unlocked.
5. The original string object is destroyed.

Now, the problem I see here is that the original string is destroyed
after the mutex is unlocked. So if std::string isn't thread-safe, this
is a problem. But I fail to see how locking the mutex before creating
the original string would make any difference. Successfully locking or
unlocking a mutex is guaranteed to synchronize memory with respect to
other threads, regardless of whether that memory was touched before or
after the mutex was locked.

--
Christer Palm
From: Daniel E. <dan...@gm...> - 2004-06-14 16:29:03
On Sun, 13 Jun 2004 at 23:47 +0200, Christer Palm wrote:
> 1. An string object is created.
> 2. The shared mutex is locked.
> 3. A shared copy of the string object is made.
> 4. The mutex is unlocked.
> 5. The original string object is destroyed.
>
> Now, the problem I see here is that the original string is destroyed
> after the mutex is unlocked. So if string isn't thread-safe this is a
> problem.

That's indeed another problem.

> But if fail to see how locking the mutex before creating the original
> string would make any difference. Successfully locking or unlocking a
> mutex is guaranteed to synchronize memory with respect to other threads
> regardless of whether that memory was touched before or after the mutex
> was locked.

That's new to me. Do you have any reference on that?

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 15:37:19
|
On Mon, 14 Jun 2004 at 1:18 +0200, Daniel Elstner wrote: > > But I fail to see how locking the mutex before creating the original > > string would make any difference. Successfully locking or unlocking a > > mutex is guaranteed to synchronize memory with respect to other threads > > regardless of whether that memory was touched before or after the mutex > > was locked. > > That's new to me. Do you have any reference on that? Okay, you're (partly) right. ("Partly" because it's not "locking or unlocking": what's needed is unlock in thread A and lock in thread B.) I found this in Butenhof: Whatever memory values a thread can see when it unlocks a mutex, either directly or by waiting on a condition variable, can also be seen by any thread that later locks the same mutex. Again, data written after the mutex is unlocked may not necessarily be seen by the thread that locks the mutex, even if the write occurs before the lock. In other words, the sequence pthread_mutex_lock(mutex); pthread_mutex_unlock(mutex); issues a memory barrier instruction on the unlock. The other thread that wants to read the data still has to lock the same mutex though. --Daniel |
From: Christer P. <pa...@no...> - 2004-06-14 17:34:52
|
Daniel Elstner wrote: > > Okay, you're (partly) right. ("Partly" because it's not "locking or > unlocking": what's needed is unlock in thread A and lock in thread B.) > I found this in Butenhof: > > Whatever memory values a thread can see when it unlocks a mutex, > either directly or by waiting on a condition variable, can also > be seen by any thread that later locks the same mutex. Again, > data written after the mutex is unlocked may not necessarily be > seen by the thread that locks the mutex, even if the write > occurs before the lock. > > In other words, the sequence > > pthread_mutex_lock(mutex); > pthread_mutex_unlock(mutex); > > issues a memory barrier instruction on the unlock. The other thread > that wants to read the data still has to lock the same mutex though. > A memory barrier, or synchronize, instruction is issued both on lock and unlock and also in a bunch of other thread-related functions. Of course all threads need to agree on which mutex protects memory location X; that's how they make sure they don't execute a region of code that accesses memory location X simultaneously. It is not that only certain memory locations are synchronized when the mutex is locked/unlocked. Having said that, is there any place in mine or Martin's code where you believe that this rule isn't followed, except as a side effect of passing objects that contain internal references? This is what IEEE Std 1003.1-2004 has to say about memory synchronization requirements: 4.10 Memory Synchronization Applications shall ensure that access to any memory location by more than one thread of control (threads or processes) is restricted such that no thread of control can read or modify a memory location while another thread of control may be modifying it. Such access is restricted using functions that synchronize thread execution and also synchronize memory with respect to other threads. The following functions synchronize memory with respect to other threads: ... pthread_mutex_lock() ... 
pthread_mutex_unlock() ... -- Christer Palm |
From: Martin S. <mar...@hi...> - 2004-06-15 07:58:50
|
On 14.06.2004 at 19:34:41, Christer Palm wrote: > Daniel Elstner wrote: >> >> Okay, you're (partly) right. ("Partly" because it's not "locking or >> unlocking": what's needed is unlock in thread A and lock in thread >> B.) >> I found this in Butenhof: >> >> Whatever memory values a thread can see when it unlocks a >> mutex, >> either directly or by waiting on a condition variable, can >> also >> be seen by any thread that later locks the same mutex. >> Again, >> data written after the mutex is unlocked may not necessarily >> be >> seen by the thread that locks the mutex, even if the write >> occurs before the lock. >> >> In other words, the sequence >> >> pthread_mutex_lock(mutex); >> pthread_mutex_unlock(mutex); >> >> issues a memory barrier instruction on the unlock. The other thread >> that wants to read the data still has to lock the same mutex though. >> > > A memory barrier, or synchronize, instruction is issued both on lock > and unlock and also in a bunch of other thread-related functions. Of > course all threads need to agree on which mutex protects memory > location X; that's how they make sure they don't execute a region > of code that accesses memory location X simultaneously. It is not that > only certain memory locations are synchronized when the mutex is > locked/unlocked. > > Having said that, is there any place in mine or Martin's code where > you believe that this rule isn't followed, except as a side effect of > passing objects that contain internal references? > > > This is what IEEE Std 1003.1-2004 has to say about memory > synchronization requirements: > > 4.10 Memory Synchronization > > Applications shall ensure that access to any memory location by more > than one thread of control (threads or processes) is restricted such > that no thread of control can read or modify a memory location while > another thread of control may be modifying it. 
Such access is > restricted using functions that synchronize thread execution and also > synchronize memory with respect to other threads. The following > functions synchronize memory with respect to other threads: > > ... > pthread_mutex_lock() > ... > pthread_mutex_unlock() > ... This gives rise to an interesting question: If no locking is required (e.g. because atomic operations are used), which is the most efficient call to establish a memory barrier (e.g. before doing the atomic operation)? In a linux driver, I would call wmb(), but what can I do on the application side? Signal a dummy condition? Regards, Martin |
From: Christer P. <pa...@no...> - 2004-06-15 22:42:30
|
Martin Schulze wrote: > > This gives rise to an interesting question: If no locking is required > (e.g. because atomic operations are used), which is the most efficient > call to establish a memory barrier (e.g. before doing the atomic > operation)? In a linux driver, I would call wmb(), but what can I do on > the application side? Signal a dummy condition? > I'm pretty sure there isn't a portable way of doing that without locking. Even if threads agree on memory contents at the actual point of synchronization, they will not stay synchronized for long. The atomic operation needs to run within the scope of synchronization, and the mechanism to implement that is architecture-dependent. It shouldn't really matter, though, as there's no portable way of issuing an atomic operation either. At least not in C/C++. -- Christer Palm |