On Nov 8, 2012, at 10:29 AM, Roy Stogner <roystgnr@...> wrote:
> On Thu, 8 Nov 2012, Kirk, Benjamin (JSC-EG311) wrote:
>> As of yesterday I've added support for the other multithreading use
>> case, asynchronous execution of an operation that would otherwise
>> block. This is commonly used in GUI programming but not needed as
>> frequently in scientific computing. Still, there are some use cases
>> where I've wanted it, and now it is available. The first such
>> occurrence is overlapping disk I/O with MPI communication.
>> Note that boost::thread is also a workable alternative here, and
>> I'll probably add support for it too.
> This sounds good (especially combined with some of the ideas I've been
> hashing out with Vikram for transient problems' I/O), but I have a
> couple of questions.
> We can't standardize on std::thread without requiring C++11, but is
> there any reason not to standardize on just one of tbb::thread or
> boost::thread as the fallback? With "BEST_UNORDERED_MAP" there are
> three different possible hash map implementations that users might
> have available (C++11, TR1, GNU) and we'd generally prefer any of the
> above over the always-available std::map fallback, so we need a lot of
> options. But if we're including a chunk of Boost, can't we just use
> boost::thread and be done with it?
Well, just like we don't require MPI or TBB, I don't want to require this. And the boost::thread option unfortunately requires a compiled Boost library - it is not one of the 'header-only' features, so we can't easily distribute it the way we do the other parts of Boost we ship.
Maybe I wasn't clear, but the intent is for all libMesh code to use a single thread abstraction.
And I'd like that to work whether std::thread is there, or tbb::tbb_thread, or boost::thread, or none of the above.
> With TBB style threading, the single-threaded fallback is simply "one
> thread handles the entire range". What's the single-threaded fallback
> for asynchronous threading? The thread's function doesn't actually
> get started executing until the join() call?
Almost. The thread constructor begins execution of the function, and the function is not required to *return* until the join() is invoked.
The fallback is that the constructor executes the function (making it blocking again) and join() is then a no-op. Because the function must be *ready* to execute in its entirety at construction time, this seemed the most logical approach.
> If we start using MPI I/O (or PHDF5, etc), is this going to be a
> conflict? We currently don't try to do any MPI communication from
> within threads so we don't care about whether our MPI stacks are
> thread-safe or not. That might have to change if we don't want to
> foreclose the possibility of using an underlying parallel I/O library,
Certainly could be. But this is implemented for serial I/O on a thread. If we switch to MPI I/O then we probably need to use it exclusively for I/O and not try to second-guess or circumvent it. Note that right now I am using this to overlap MPI communication with serial I/O, which isn't an issue. If we move to a parallel I/O library underneath, some code will certainly have to change.
And for our use cases going forward I don't see a need to invoke MPI from the thread(s) - while most MPI stacks now support this for most operations, it seems like an unnecessary complication.
Aside from the use case we have right now, this could be used, for example, to detach a "sleeper thread" which periodically checks for the existence of a file and takes action when it appears - really, the opportunities are endless.