libsigcx-main Mailing List for libSigC++ Extras (Page 3)
From: Christer P. <pa...@no...> - 2004-06-14 17:34:52
Daniel Elstner wrote:
> Okay, you're (partly) right. ("Partly" because it's not "locking or
> unlocking": what's needed is unlock in thread A and lock in thread B.)
> I found this in Butenhof:
>
>     Whatever memory values a thread can see when it unlocks a mutex,
>     either directly or by waiting on a condition variable, can also
>     be seen by any thread that later locks the same mutex. Again,
>     data written after the mutex is unlocked may not necessarily be
>     seen by the thread that locks the mutex, even if the write
>     occurs before the lock.
>
> In other words, the sequence
>
>     pthread_mutex_lock(mutex);
>     pthread_mutex_unlock(mutex);
>
> issues a memory barrier instruction on the unlock. The other thread
> that wants to read the data still has to lock the same mutex though.

A memory barrier, or synchronize, instruction is issued both on lock and
unlock, and also in a bunch of other thread-related functions. Of course
all threads need to agree on which mutex protects memory location X;
that's how they make sure they don't execute a region of code that
accesses memory location X simultaneously. It's not that only certain
memory locations are synchronized when the mutex is locked/unlocked.

Having said that, is there any place in my or Martin's code where you
believe this rule isn't followed, except as a side effect of passing
objects that contain internal references?

This is what IEEE Std 1003.1-2004 has to say about memory
synchronization requirements:

    4.10 Memory Synchronization

    Applications shall ensure that access to any memory location by
    more than one thread of control (threads or processes) is
    restricted such that no thread of control can read or modify a
    memory location while another thread of control may be modifying
    it. Such access is restricted using functions that synchronize
    thread execution and also synchronize memory with respect to other
    threads. The following functions synchronize memory with respect
    to other threads:

    ...
    pthread_mutex_lock()
    ...
    pthread_mutex_unlock()
    ...

--
Christer Palm
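A minimal sketch of the rule both of the passages above describe, using
plain POSIX threads (the names publish/consume are illustrative, not
taken from SigCX or Glibmm): thread A writes the shared data and then
unlocks; thread B locks the same mutex before reading, which supplies
the matching barrier.

    #include <pthread.h>

    // Shared state, always accessed under 'mutex'.  A plain int is used
    // on purpose: a reference-counted string would reintroduce the very
    // problem discussed in this thread.
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static int  shared_value = 0;
    static bool ready        = false;

    // Thread A: whatever is written before the unlock is visible to any
    // thread that later locks the same mutex.
    void publish(int value)
    {
      pthread_mutex_lock(&mutex);
      shared_value = value;
      ready        = true;
      pthread_mutex_unlock(&mutex);
    }

    // Thread B: the matching lock provides the barrier on the reading side.
    bool consume(int& out)
    {
      pthread_mutex_lock(&mutex);
      const bool was_ready = ready;
      if (was_ready)
      {
        out   = shared_value;
        ready = false;
      }
      pthread_mutex_unlock(&mutex);
      return was_ready;
    }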
From: Daniel E. <dan...@gm...> - 2004-06-14 16:29:03
On Sun, 13 Jun 2004 at 23:47 +0200, Christer Palm wrote:
> 1. A string object is created.
> 2. The shared mutex is locked.
> 3. A shared copy of the string object is made.
> 4. The mutex is unlocked.
> 5. The original string object is destroyed.
>
> Now, the problem I see here is that the original string is destroyed
> after the mutex is unlocked. So if string isn't thread-safe, this is a
> problem.

That's indeed another problem.

> But I fail to see how locking the mutex before creating the original
> string would make any difference. Successfully locking or unlocking a
> mutex is guaranteed to synchronize memory with respect to other threads,
> regardless of whether that memory was touched before or after the mutex
> was locked.

That's new to me. Do you have any reference on that?

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 16:18:36
On Sun, 13 Jun 2004 at 16:33 +0200, Daniel Elstner wrote:
> Wrong! It's not that simple. Whenever two threads access the same
> data, both have to acquire the same mutex for any access to it
> whatsoever, be it reading or writing. The only situation where this
> rule doesn't apply is if thread A creates the data before launching
> thread B, and both threads never write to it again, or only thread B
> does and thread A never accesses it at all.

There's another exception: joining a thread completely synchronizes the
memory of the thread being joined with the thread doing the joining.

--Daniel
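A short sketch of that exception, assuming plain POSIX threads (names
are illustrative): once pthread_join() has returned, the joining thread
may read the worker's results without any mutex.

    #include <pthread.h>
    #include <string>

    // Written only by the worker thread; read by main() after the join.
    static std::string result;

    static void* worker(void*)
    {
      result = "computed in the worker thread";
      return 0;
    }

    int main()
    {
      pthread_t thread;
      pthread_create(&thread, 0, &worker, 0);

      pthread_join(thread, 0);   // synchronizes memory with the joined thread

      // Safe: the worker has terminated and the join provides the barrier.
      return result.empty() ? 1 : 0;
    }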
From: Daniel E. <dan...@gm...> - 2004-06-14 15:43:56
On Sun, 13 Jun 2004 at 12:57 +0200, Christer Palm wrote:
> Daniel Elstner wrote:
>
>> When I thought about adding signal parameter support to Glib::Dispatcher
>> a while ago, I played with the idea of making Glib::Dispatcher provide a
>> low-level interface for sending raw blocks of memory through the pipe.
>> On top of that there could be some kind of plugin interface that
>> requires you to implement two functions that convert your data to raw
>> memory and back again.
>
> Isn't this exactly what CORBA, for example, is all about?
> While it
>
> Whoops, it got chopped off... Here we go again.
>
> While I think that this would be great for an inter-process or
> inter-network communication mechanism, I don't think it's a very good
> idea for inter-thread communication.
>
> Serializing/deserializing is usually very inefficient and is also
> extremely hard to do in a C++ environment. You'd need to know how to
> marshal each and every class in the object's type as well as containment
> hierarchy. Leaving all this to the implementor of the top-level class
> will definitely break the basic OO principles, and will be bound to be
> very error-prone.

Wrong. Locking is inefficient. Serializing is much faster unless you're
talking about serializing a whole file or something.

And right, you do need to know how to serialize each and every object
you use. Just as you'd need to know details about the implementation of
a class, such as "does it use reference counting internally?" before
you can be sure that simply locking a mutex actually works.

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 15:39:57
On Sun, 13 Jun 2004 at 11:39 +0200, Christer Palm wrote:
>> Even if it does this you still need mutex locking to protect
>> the memory being shared (ensuring that event A happens after event B is
>> not enough due to the way modern hardware works; you definitely need
>> memory barriers too).
>
> Synchronous signals _do_ use a mutex behind the scenes to implement
> the locking. Not that, AFAIK, mutexes do anything more than ensuring
> that event A happens after event B.

The problem is you need to lock before the data is being written. And
mutexes don't ensure that event A happens after event B. Mutexes ensure
that read/write A and read/write B don't happen at the same time, _and_
they issue memory barrier instructions to ensure memory visibility.

>> Also, you can always use plain Glib::Dispatcher in conjunction with a
>> Glib::Mutex to pass data around. This way you're forced to think about
>> the locking, which is IMHO a good thing.
>
> IMHO, this is like saying that function arguments are unnecessary,
> because you could always use global variables to pass data around.

Remember, we're talking about thread synchronization. This is not
something to be taken lightly. Also note that nobody said that the
locked data has to be global; you can easily store it in an object
somewhere appropriate, or even put it into a queue.

--Daniel
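A rough sketch of the plain-Glib::Dispatcher-plus-mutex arrangement
suggested above, using the gtkmm-2.4 era glibmm API. The MessageChannel
class and its members are hypothetical names, not an existing Glibmm
class, and Glib::thread_init() is assumed to have been called.

    #include <glibmm.h>
    #include <iostream>
    #include <queue>
    #include <string>

    // Worker threads push strings into a queue and poke the dispatcher;
    // the GUI thread drains the queue in on_notify(), which
    // Glib::Dispatcher guarantees to run in the main loop's thread.
    class MessageChannel
    {
    public:
      MessageChannel()
      {
        dispatcher_.connect(sigc::mem_fun(*this, &MessageChannel::on_notify));
      }

      // Called from any worker thread.
      void send(const char* text)
      {
        {
          Glib::Mutex::Lock lock (mutex_);
          queue_.push(std::string(text));  // constructed inside the lock
        }
        dispatcher_.emit();                // wake up the main loop
      }

    private:
      // Runs in the GUI thread.
      void on_notify()
      {
        for (;;)
        {
          std::string text;
          {
            Glib::Mutex::Lock lock (mutex_);
            if (queue_.empty())
              break;
            text = queue_.front();
            queue_.pop();
          }
          std::cout << text << std::endl;  // handle the message
        }
      }

      Glib::Dispatcher        dispatcher_;
      Glib::Mutex             mutex_;
      std::queue<std::string> queue_;
    };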
From: Daniel E. <dan...@gm...> - 2004-06-14 15:37:19
On Mon, 14 Jun 2004 at 1:18 +0200, Daniel Elstner wrote:
>> But I fail to see how locking the mutex before creating the original
>> string would make any difference. Successfully locking or unlocking a
>> mutex is guaranteed to synchronize memory with respect to other threads,
>> regardless of whether that memory was touched before or after the mutex
>> was locked.
>
> That's new to me. Do you have any reference on that?

Okay, you're (partly) right. ("Partly" because it's not "locking or
unlocking": what's needed is unlock in thread A and lock in thread B.)
I found this in Butenhof:

    Whatever memory values a thread can see when it unlocks a mutex,
    either directly or by waiting on a condition variable, can also
    be seen by any thread that later locks the same mutex. Again,
    data written after the mutex is unlocked may not necessarily be
    seen by the thread that locks the mutex, even if the write
    occurs before the lock.

In other words, the sequence

    pthread_mutex_lock(mutex);
    pthread_mutex_unlock(mutex);

issues a memory barrier instruction on the unlock. The other thread
that wants to read the data still has to lock the same mutex though.

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 15:35:24
On Sun, 13 Jun 2004 at 21:34 +0200, Christer Palm wrote:
> Hmm. Perhaps it would be better if you took a look at the code and told
> me where the problem is?

Sorry, I just didn't have the time to do that. I consider this to be a
conceptual problem though, not specific to your code. Do tell me if I'm
missing something.

> I'm not taking it lightly. My point was that although you could
> technically accomplish the same thing without arguments, arguments are a
> pretty useful feature.

Agreed.

> If you have similar code in perhaps hundreds of places doing similar
> things, then it would make sense to attempt to make a generic
> implementation of that pattern. Which is exactly what this is about.

Indeed. The question is whether we can write a generic implementation
that actually works.

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 15:34:19
On Sun, 13 Jun 2004 at 2:10 +0200, Martin Schulze wrote:
>> Hmm, std::string is a perfect example of an argument type that
>> requires special handling.
>
> Why? The slot object is completely initialized before the dispatcher
> knows of it. Note that sigc::bind does not take arguments as references
> by default, if this is where you are heading.

std::string can be implemented with reference counting, and the
libstdc++ shipped with GCC does exactly that.

>> Even if it does this you still need mutex locking to protect
>> the memory being shared (ensuring that event A happens after event B
>> is not enough due to the way modern hardware works; you definitely
>> need memory barriers too).
>
> Why would you need memory barriers? Thread A creates some objects,
> thread B (the dispatcher) uses them and destroys them afterwards.
> Of course, if you pass references around, you need to make sure
> yourself that thread A doesn't manipulate the data while thread B is
> handling it.

Wrong! It's not that simple. Whenever two threads access the same
data, both have to acquire the same mutex for any access to it
whatsoever, be it reading or writing. The only situation where this
rule doesn't apply is if thread A creates the data before launching
thread B, and both threads never write to it again, or only thread B
does and thread A never accesses it at all.

I highly recommend reading Butenhof's Programming with POSIX Threads.
In particular, memorize Chapter 3.4, "Memory visibility between
threads". Here's a table from that chapter:

    Time   Thread 1                         Thread 2
    -----------------------------------------------------------------
    t      write "1" to address 1 (cache)
    t+1    write "2" to address 2 (cache)   read "0" from address 1
    t+2    cache system flushes address 2
    t+3                                     read "2" from address 2
    t+4    cache system flushes address 1

The point here is that there are no guarantees about memory ordering
whatsoever. As it happens, reading address 2 works by chance, but the
read from address 1 returns the wrong value despite the fact that the
read happens after the write was completed. Special instructions,
called "memory barriers", are required to guarantee ordering.
Locking/unlocking a mutex issues these instructions.

--Daniel
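For readers who want to see the barrier made explicit: the same ordering
problem, and the fix the mutex provides, can be written with C++11
atomics. These postdate this thread by several years; the sketch is
purely illustrative.

    #include <atomic>

    // Thread 1 publishes 'payload' and then sets the flag; thread 2
    // waits for the flag and then reads the payload.
    int payload = 0;

    // Broken without a barrier: a plain bool gives no ordering
    // guarantee, so thread 2 may see ready == true and still read a
    // stale payload.
    //   bool ready = false;

    // With release/acquire, the store to 'payload' becomes visible
    // before the store to 'ready' -- the same effect the pthread mutex
    // unlock/lock pair provides.
    std::atomic<bool> ready(false);

    void thread1()
    {
      payload = 42;
      ready.store(true, std::memory_order_release);   // writer-side barrier
    }

    void thread2()
    {
      while (!ready.load(std::memory_order_acquire))  // reader-side barrier
        ;                                             // spin (illustration only)
      // payload is now guaranteed to read 42.
    }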
From: Daniel E. <dan...@gm...> - 2004-06-14 15:33:09
On Sun, 13 Jun 2004 at 18:30 +0200, Martin Schulze wrote:
>>> knows of it. Note that sigc::bind does not take arguments as
>>> references by default if this is where you are heading.
>>
>> std::string can be implemented with reference counting, and the
>> libstdc++ shipped with GCC does exactly that.
>
> Meaning that no deep copy of the string is made although it is passed
> "by value"?! Then I understand the problem here.

Exactly.

> (However, if you pass a "const char*" into the std::string ctor as in
> my example, the copy is being created at once, isn't it?)

Right. But you have to lock *before* creating the std::string object!

> I still don't see the problem in the case where no references/pointers
> are being passed around: the list of slots the dispatcher operates on
> _is_ protected by memory barriers (there might be bugs in my code, but
> it is perfectly possible to simply use a mutex around
> 'std::list::push_back()' / 'std::list::pop_front()', as I pointed out
> in a comment and as Christer does).

Sure, but the caller passes in an already constructed std::string. As I
said above, you need to lock before constructing the object. The only
alternative is a deep copy (that's what I'm proposing for the improved
Glib::Dispatcher implementation).

--Daniel
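A sketch of the deep-copy alternative mentioned above, assuming a
possibly reference-counted (copy-on-write) std::string such as the
libstdc++ one under discussion; the helper name is illustrative.

    #include <string>

    // Copy-constructing from another std::string may only bump a
    // reference count on a copy-on-write implementation, so the two
    // objects silently share one buffer across threads:
    //
    //   std::string shared_copy(original);   // possibly shallow
    //
    // Constructing from the raw characters forces a genuine deep copy:
    std::string deep_copy(const std::string& original)
    {
      return std::string(original.data(), original.size());
    }

The copy still has to be made while the sending thread owns the data,
before the result is handed to the other thread.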
From: Daniel E. <dan...@gm...> - 2004-06-14 15:29:06
On Sun, 13 Jun 2004 at 21:23 +0200, Christer Palm wrote:
> I would say that this is quite dependent on the locking scheme, lock
> contention potential, lock wait time, the complexity of the object
> being serialized and whether you are talking of efficiency in terms of
> lead time, consumption of CPU cycles or code size. But I happily stand
> corrected if you could back that claim up.

Granted, it depends on the situation. My opinion is mostly based on the
experience of others who work on realtime-critical applications, such
as Paul Davis' audio stuff.

> I do agree that passing or sharing objects safely between threads, or
> indeed, just making a copy of an object, has similar limitations in C++.
> However, we're not trying to solve that problem. The user _would_ still
> have to take the necessary precautions.
>
> If your stance is that you'd rather avoid doing something altogether if
> there's no way of making it foolproof, then so be it.

Nothing is foolproof. My point is that it's just too easy to use e.g.
std::string objects without thinking about the consequences.

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 15:21:37
On Sun, 13 Jun 2004 at 22:18 +0200, Martin Schulze wrote:
>     time   thread A should do (*does)            thread B [*does]
>     -------------------------------------------------------------------
>     t      construct slot
>            (*slot is partly constructed)
>     t+2    acquire lock
>            (*lock is acquired)
>     t+3    add slot to list
>     t+3    release lock
>            (*slot is added to list)
>            (*lock is released)
>     t+4    idle                                  [*accesses slot]
>     t+5    (*construction of slot is finished)
>
> Is this a possible scenario? I can't think properly about it at the
> moment - too tired. Note that the slot is copied again while being
> added to the list during std::list::push_back(). I would assume that
> this copy is fully initialized before the lock is actually released.
> Of course in the case of our std::string this is still a problem,
> because the newly constructed string is a shallow copy of the old one.
> But for static data types everything should be all right, shouldn't it?

Yes. But at least in libsigc++ 1.2, slots were reference-counted. Has
this changed in 2.0?

--Daniel
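For reference, a minimal sketch of the mutex-around-push_back() /
pop_front() arrangement being debated, assuming libsigc++ 2.x slots.
This is not the actual SigCX or GtkmmDispatcher code, and the caveat
about reference-counted (shallow) copies still applies.

    #include <list>
    #include <pthread.h>
    #include <sigc++/sigc++.h>

    class SlotQueue
    {
    public:
      SlotQueue()  { pthread_mutex_init(&mutex_, 0); }
      ~SlotQueue() { pthread_mutex_destroy(&mutex_); }

      // Worker thread: the copy made by push_back() happens inside the
      // critical section, so it is published by the unlock.  Note that
      // if the slot (or data bound into it) is reference-counted, that
      // copy is shallow -- exactly the concern raised in this thread.
      void push(const sigc::slot<void>& slot)
      {
        pthread_mutex_lock(&mutex_);
        slots_.push_back(slot);
        pthread_mutex_unlock(&mutex_);
      }

      // GUI thread, after the dispatcher fires.
      bool pop(sigc::slot<void>& slot)
      {
        pthread_mutex_lock(&mutex_);
        const bool have_one = !slots_.empty();
        if (have_one)
        {
          slot = slots_.front();
          slots_.pop_front();
        }
        pthread_mutex_unlock(&mutex_);
        return have_one;
      }

    private:
      pthread_mutex_t              mutex_;
      std::list<sigc::slot<void> > slots_;
    };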
From: Daniel E. <dan...@gm...> - 2004-06-14 15:17:59
On Mon, 14 Jun 2004 at 10:07 +0200, Martin Schulze wrote:
> Hm... after reading:
>
> http://gcc.gnu.org/onlinedocs/libstdc++/faq/#5_6
>
> I still cannot answer the question whether string (in the example from
> above) _is_ thread-safe?!

Well, std::string in libstdc++ uses atomic increment and decrement for
the reference count. However, the problem is that you cannot rely on
this, since the standard doesn't say anything about it AFAIK.

Also, I'm not sure if an atomic reference count is actually sufficient
to always guarantee thread-safety... suppose you have:

    std::string a ("foo");  // in thread 1

    std::string b (a);      // in thread 2
    b[2] = 'b';             // mutate to force copy (in thread 2)

Now, atomic reference count or not, you have the situation that thread 2
reads memory written by thread 1 without any mutex locking. Well, unless
atomic inc/dec also have the effect of a memory barrier, which would
make them rather heavy...

--Daniel
From: Daniel E. <dan...@gm...> - 2004-06-14 14:10:27
On Sat, 12 Jun 2004 at 15:19 +0200, Christer Palm wrote:
> Thanks Martin!
>
> I "stole" the basic principles from your class to create my own variant
> of a multi-argument Gtk::Dispatcher. As I think this is some pretty
> useful stuff, I'm posting the result here.
>
> Perhaps this could be adapted to go into the Glibmm distribution?
> Let me know what you all think.

The problem is that I'm still not convinced that cross-thread
signalling with arguments can be implemented correctly as a generic
template. From Martin's description of the SigCX solution it appears
that a so-called synchronous mode is used to get around that:

    disp.tunnel(sigc::bind(&foo, std::string("hello world")));
    /* - call this function from any thread;
       - bind up to 7 arguments at once;
       - pass 'true' as a second argument if the slot should be
         executed synchronously;
       - make use of signal::make_slot() if the dispatcher should
         emit a signal;
     */

Hmm, std::string is a perfect example of an argument type that requires
special handling. So assuming "synchronously" does what I think it
does, why doesn't the example use this mode?

How does this synchronous thing work? Does it really wait for the main
loop to process the signal, and wouldn't that defeat the purpose of
threads? Even if it does this, you still need mutex locking to protect
the memory being shared (ensuring that event A happens after event B is
not enough due to the way modern hardware works; you definitely need
memory barriers too).

Also, you can always use plain Glib::Dispatcher in conjunction with a
Glib::Mutex to pass data around. This way you're forced to think about
the locking, which is IMHO a good thing.

When I thought about adding signal parameter support to
Glib::Dispatcher a while ago, I played with the idea of making
Glib::Dispatcher provide a low-level interface for sending raw blocks
of memory through the pipe. On top of that there could be some kind of
plugin interface that requires you to implement two functions that
convert your data to raw memory and back again. I don't think it should
be entirely automagic through templates, since that'd give the user a
false sense of security. The big advantage of course is the avoidance
of any locking whatsoever.

Comments, corrections, insights, etc. will be appreciated.

Cheers,
--Daniel
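A sketch of the "raw blocks of memory through the pipe" idea, using a
plain POSIX pipe. This is illustrative only, not the Glib::Dispatcher
implementation; error handling and short reads/writes are omitted.

    #include <unistd.h>
    #include <string>

    // Sender side: serialize to raw bytes (here: length prefix plus
    // characters) and push them through the pipe's write end.
    void send_text(int write_fd, const std::string& text)
    {
      const size_t length = text.size();
      write(write_fd, &length, sizeof(length));
      write(write_fd, text.data(), length);
    }

    // Receiver side (runs in the thread watching the read end): rebuild
    // a completely independent std::string from the bytes, so no heap
    // object is shared between the threads.
    std::string receive_text(int read_fd)
    {
      size_t length = 0;
      read(read_fd, &length, sizeof(length));

      std::string text(length, '\0');
      if (length > 0)
        read(read_fd, &text[0], length);
      return text;
    }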
From: Paul D. <pa...@li...> - 2004-06-14 13:02:46
> Hm... after reading:
>
> http://gcc.gnu.org/onlinedocs/libstdc++/faq/#5_6
>
> I still cannot answer the question whether string (in the example from
> above) _is_ thread-safe?!

std::string can never be considered thread safe. it is possible that
some specific implementations of std::string are thread safe, either by
design or by accident, but you should never assume this.

std::string instances may share references to a common data buffer.
multiple threads may be operating on multiple instances of std::string
which happen to share the data buffer. there is no interlocking, no
mutexes and absolutely no memory barrier that controls these
interactions. consider:

    std::string foo = "foobar";
    std::string bar = foo;   // foo and bar both use the same data buffer

    thread 1                             thread 2
    ========                             ========
    foo[0] = 's';  // buffer holds "soobar"
                                         if (bar[0] == 'f')

instant race condition if the implementation of std::string is devoid
of mutexes. depending on the execution order of the two threads, thread
2 may or may not take a particular branch based on the first character
of the string.

designing thread safe data structures is tricky. std::string has never
been intended to have these semantics, and it's quite difficult to add
them. the simplest approach is to use mutex-protected copy-on-write,
but even that has complications. if you need a thread safe string
class, you will need to implement your own or find one on the net (i
have never looked, but i imagine one might exist).

--p
From: Christer P. <pa...@no...> - 2004-06-14 13:02:26
Martin Schulze wrote:
> I've just updated a dispatcher class I've been using for libsigc++ 2.0.
> It's based on sigcx but only implements a subset of the functionality.
> Maybe it can serve as a starting point for the port or you can just use
> it. I can't test it myself because I still haven't updated to gtkmm-2.4
> (low bandwidth connection :( ). Attached is a tarball that also
> includes a simple (single-threaded) test program to illustrate the
> usage:

Thanks Martin!

I "stole" the basic principles from your class to create my own variant
of a multi-argument Gtk::Dispatcher. As I think this is some pretty
useful stuff, I'm posting the result here.

Perhaps this could be adapted to go into the Glibmm distribution?
Let me know what you all think.

--
Christer Palm
From: Christer P. <pa...@no...> - 2004-06-14 12:30:11
Daniel Elstner wrote:
> On Sun, 13 Jun 2004 at 12:57 +0200, Christer Palm wrote:
>> Serializing/deserializing is usually very inefficient and is also
>> extremely hard to do in a C++ environment. You'd need to know how to
>> marshal each and every class in the object's type as well as containment
>> hierarchy. Leaving all this to the implementor of the top-level class
>> will definitely break the basic OO principles, and will be bound to be
>> very error-prone.
>
> Wrong. Locking is inefficient. Serializing is much faster unless
> you're talking about serializing a whole file or something.

I would say that this is quite dependent on the locking scheme, lock
contention potential, lock wait time, the complexity of the object
being serialized and whether you are talking of efficiency in terms of
lead time, consumption of CPU cycles or code size. But I happily stand
corrected if you could back that claim up.

> And right, you do need to know how to serialize each and every object
> you use. Just as you'd need to know details about the implementation of
> a class, such as "does it use reference counting internally?" before
> you can be sure that simply locking a mutex actually works.

Not only do you need to know how to serialize the object, but you also
need the code to do it. And if you don't have it, apart from actually
writing it - where would it go? It should, and may have to, go into the
classes themselves, because of OO principles and the potential need to
access private members.

As much as I would like to have that in C++, it just isn't there. It
seems to me that attempting to fix that in Glib just so that you could
do cross-thread signalling is just way over the top.

I do agree that passing or sharing objects safely between threads, or
indeed, just making a copy of an object, has similar limitations in C++.
However, we're not trying to solve that problem. The user _would_ still
have to take the necessary precautions.

If your stance is that you'd rather avoid doing something altogether if
there's no way of making it foolproof, then so be it.

--
Christer
From: Christer P. <pa...@no...> - 2004-06-14 12:29:05
Daniel Elstner wrote:
> The problem is you need to lock before the data is being written. And
> mutexes don't ensure that event A happens after event B. Mutexes ensure
> that read/write A and read/write B don't happen at the same time, _and_
> they issue memory barrier instructions to ensure memory visibility.

Hmm. Perhaps it would be better if you took a look at the code and told
me where the problem is?

> Remember, we're talking about thread synchronization. This is not
> something to be taken lightly. Also note that nobody said that the
> locked data has to be global; you can easily store it in an object
> somewhere appropriate, or even put it into a queue.

I'm not taking it lightly. My point was that although you could
technically accomplish the same thing without arguments, arguments are a
pretty useful feature.

If you have similar code in perhaps hundreds of places doing similar
things, then it would make sense to attempt to make a generic
implementation of that pattern. Which is exactly what this is about.

--
Christer Palm
From: Christer P. <pa...@no...> - 2004-06-14 12:16:28
Daniel Elstner wrote:
> Sure, but the caller passes in an already constructed std::string. As I
> said above, you need to lock before constructing the object.

Hmmm. Let's see what's going on here...

1. A string object is created.
2. The shared mutex is locked.
3. A shared copy of the string object is made.
4. The mutex is unlocked.
5. The original string object is destroyed.

Now, the problem I see here is that the original string is destroyed
after the mutex is unlocked. So if string isn't thread-safe, this is a
problem.

But I fail to see how locking the mutex before creating the original
string would make any difference. Successfully locking or unlocking a
mutex is guaranteed to synchronize memory with respect to other threads,
regardless of whether that memory was touched before or after the mutex
was locked.

--
Christer Palm
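The five steps above, written out with the lock scope made explicit (a
sketch assuming POSIX threads; the names are illustrative). Note that
step 5 happens after the unlock.

    #include <pthread.h>
    #include <string>

    // Stand-ins for "the shared mutex" and "the shared copy" from the
    // steps above (illustrative names only).
    static pthread_mutex_t shared_mutex = PTHREAD_MUTEX_INITIALIZER;
    static std::string     shared_copy;

    void send(const char* text)
    {
      std::string original(text);          // 1. a string object is created

      pthread_mutex_lock(&shared_mutex);   // 2. the shared mutex is locked
      shared_copy = original;              // 3. a copy of the string is made
      pthread_mutex_unlock(&shared_mutex); // 4. the mutex is unlocked

      // 5. 'original' is destroyed when it goes out of scope here, i.e.
      //    after the unlock -- and with a reference-counted std::string
      //    the "copy" in step 3 may still share the original's buffer,
      //    which is the detail being debated.
    }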
From: Christer P. <pa...@no...> - 2004-06-14 12:11:30
A few comments in addition to what Martin already wrote...

Daniel Elstner wrote:
> How does this synchronous thing work? Does it really wait for the main
> loop to process the signal, and wouldn't that defeat the purpose of
> threads?

It's just a convenient form of locking for those apps that want/need it.
Usually, it is perfectly acceptable to block until the signal handler
returns. It definitely doesn't defeat the purpose of threads.

> Even if it does this you still need mutex locking to protect
> the memory being shared (ensuring that event A happens after event B is
> not enough due to the way modern hardware works; you definitely need
> memory barriers too).

Synchronous signals _do_ use a mutex behind the scenes to implement
the locking. Not that, AFAIK, mutexes do anything more than ensuring
that event A happens after event B.

> Also, you can always use plain Glib::Dispatcher in conjunction with a
> Glib::Mutex to pass data around. This way you're forced to think about
> the locking, which is IMHO a good thing.

IMHO, this is like saying that function arguments are unnecessary,
because you could always use global variables to pass data around.

> When I thought about adding signal parameter support to Glib::Dispatcher
> a while ago, I played with the idea of making Glib::Dispatcher provide a
> low-level interface for sending raw blocks of memory through the pipe.
> On top of that there could be some kind of plugin interface that
> requires you to implement two functions that convert your data to raw
> memory and back again.

Isn't this exactly what CORBA, for example, is all about?
While it

> I don't think it should be entirely automagic
> through templates since that'd give the user a false sense of security.
> The big advantage of course is avoidance of any locking whatsoever.
>
> Comments, corrections, insights, etc. will be appreciated.
>
> Cheers,
> --Daniel
From: Christer P. <pa...@no...> - 2004-06-14 12:06:11
Christer Palm wrote:
> A few comments in addition to what Martin already wrote...
>
> Daniel Elstner wrote:
>
>> When I thought about adding signal parameter support to Glib::Dispatcher
>> a while ago, I played with the idea of making Glib::Dispatcher provide a
>> low-level interface for sending raw blocks of memory through the pipe.
>> On top of that there could be some kind of plugin interface that
>> requires you to implement two functions that convert your data to raw
>> memory and back again.
>
> Isn't this exactly what CORBA, for example, is all about?
> While it

Whoops, it got chopped off... Here we go again.

While I think that this would be great for an inter-process or
inter-network communication mechanism, I don't think it's a very good
idea for inter-thread communication.

Serializing/deserializing is usually very inefficient and is also
extremely hard to do in a C++ environment. You'd need to know how to
marshal each and every class in the object's type as well as containment
hierarchy. Leaving all this to the implementor of the top-level class
will definitely break the basic OO principles, and will be bound to be
very error-prone.

--
Christer Palm
From: Timothy M. S. <ts...@k-...> - 2004-06-13 19:47:52
Christer Palm wrote:
> Not only do you need to know how to serialize the object, but you also
> need the code to do it. And if you don't have it, apart from actually
> writing it - where would it go? It should, and may have to, go into the
> classes themselves, because of OO principles and the potential need to
> access private members.
>
> As much as I would like to have that in C++, it just isn't there. It
> seems to me that attempting to fix that in Glib just so that you could
> do cross-thread signalling is just way over the top.

I can't comment on the threading-specific issues raised in this
argument, but I do have to correct the common misperception that C++
doesn't have serialization - it most certainly does, using the OO,
type-safe iostreams interface. See boost::lexical_cast for a
trivial-but-effective tool for putting those serialization capabilities
to work.

Tim Shead
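A small illustration of the iostreams-based serialization referred to
here; boost::lexical_cast is the real Boost utility, while the Point
type is just an example.

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <boost/lexical_cast.hpp>

    // Any type with stream operators can be converted to and from text.
    struct Point
    {
      int x, y;
    };

    std::ostream& operator<<(std::ostream& os, const Point& p)
    { return os << p.x << ' ' << p.y; }

    std::istream& operator>>(std::istream& is, Point& p)
    { return is >> p.x >> p.y; }

    int main()
    {
      Point p;
      p.x = 3;
      p.y = 4;

      // Plain iostreams serialization through a stringstream...
      std::stringstream buffer;
      buffer << p;

      Point restored;
      buffer >> restored;

      // ...and boost::lexical_cast for the trivial single-value case.
      const std::string as_text = boost::lexical_cast<std::string>(restored.x);
      const int         back    = boost::lexical_cast<int>(as_text);

      std::cout << restored.x << ',' << restored.y << ' ' << back << std::endl;
      return 0;
    }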
From: <Ant...@is...> - 2004-06-13 19:41:12
IMHO you should study the ACE message queue module:

http://www.dre.vanderbilt.edu/Doxygen/Current/html/ace/classACE__Message__Queue.html

It focuses on communication among threads.

Regards,
Antonio J. Saenz
Isotrol, S.A.
CTO
From: Parnell F. <par...@me...> - 2004-06-12 13:41:33
I apologise for my last e-mail; I had not bothered to look through the
archives before asking about porting SigCX to sigc++ 2.0.

I would gladly help to port SigCX; I am using it on a project that I
would like to port over to gtkmm 2.4. I have been thinking about doing
an "underground" port of SigCX for my own purposes, but I would much
rather work with you. I am very pleased with the elegance and
functionality of SigCX, and I would hate to see it "stall out".

Parnell
From: Parnell F. <par...@me...> - 2004-06-11 13:16:31
I was just wondering when, if ever, a port to libsigc++ might appear?

Parnell
From: Martin S. <mar...@hi...> - 2004-05-31 14:44:23
On 2004.05.30 19:42, Christer Palm wrote:
> Hi Martin!
>
> Martin Schulze wrote:
>>
>> Which classes from libSigCX do you need? At some time I had some
>> random thoughts about libSigCX & sigc++-2.0 - some things should be
>> much easier to implement with the new API.
>
> My app depends rather heavily on SigCX::GtkDispatcher and
> SigCX::ThreadTunnel to do cross-thread signalling. I also pass around
> a lot of signal arguments by reference, so I also depend on being able
> to have fully synchronous signals.

I've just updated a dispatcher class I've been using for libsigc++ 2.0.
It's based on SigCX but only implements a subset of the functionality.
Maybe it can serve as a starting point for the port, or you can just
use it. I can't test it myself because I still haven't updated to
gtkmm-2.4 (low bandwidth connection :( ). Attached is a tarball that
also includes a simple (single-threaded) test program to illustrate the
usage:

    void foo(std::string text)
    {
      std::cout << text.c_str() << std::endl;
    }

    int main(int argc, char *argv[])
    {
      Glib::thread_init();
      Gtk::Main myApp(argc, argv);

      sigc::GtkmmDispatcher disp;
      disp.tunnel(sigc::bind(&foo, std::string("hello world")));
      /* - call this function from any thread;
         - bind up to 7 arguments at once;
         - pass 'true' as a second argument if the slot should be
           executed synchronously;
         - make use of signal::make_slot() if the dispatcher should
           emit a signal;
       */
    }

The dispatcher tunnels slots across threads. It uses a Glib::Dispatcher
internally to trigger their execution. For performance reasons an extra
list class is included to store the slots: 'ringbuffer' - a fixed-size
list that supports multithreaded addition/removal of objects without
any locks. (It can be replaced by a std::list and a mutex easily.)

To pass return values across threads an additional adaptor class would
be needed.

Regards,
Martin

> Perhaps it would be a good idea to extend the existing
> Glib::Dispatcher mechanism to provide the functionality of SigCX? IMO,
> Glib::Dispatcher is rather useless in its current form, and yet,
> cross-thread signalling is definitely one of the, if not _the_, most
> convenient and elegant ways of doing multithreaded gtkmm
> programming...
>
> --
> Christer Palm
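The 'ringbuffer' class from the tarball is not reproduced here; below
is a generic sketch of the same idea, a fixed-size single-producer /
single-consumer ring buffer, written with C++11 atomics (which postdate
this thread) to make the memory-visibility guarantees explicit. It is
an illustration, not the class Martin attached.

    #include <atomic>
    #include <cstddef>

    // Single-producer/single-consumer ring buffer: one thread may call
    // push(), one other thread may call pop().  The release/acquire
    // pair on the indices provides the memory barriers; no mutex is
    // needed.
    template <typename T, std::size_t N>
    class RingBuffer
    {
    public:
      RingBuffer() : head_(0), tail_(0) {}

      bool push(const T& value)              // producer thread only
      {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
          return false;                      // buffer full
        buffer_[head] = value;
        head_.store(next, std::memory_order_release);  // publish the slot
        return true;
      }

      bool pop(T& value)                     // consumer thread only
      {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
          return false;                      // buffer empty
        value = buffer_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
      }

    private:
      T buffer_[N];
      std::atomic<std::size_t> head_;        // next slot to write
      std::atomic<std::size_t> tail_;        // next slot to read
    };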