From: Greg H. <gh...@ps...> - 2006-08-02 21:03:44
|
Upi, I have spent the past week looking through the MOOSE code and documentation, and am trying to get a good understanding of it before deciding on the best way of parallelizing it. I don't feel that I have yet reached that point, but in the meantime I have several concerns that I'd like to discuss with you:

1. pointers versus handles

The typical way that an element is referenced within MOOSE is by a pointer (Element *). What this effectively means is that objects are frozen to a particular memory location on a particular node, and it will be next to impossible to later move them to different nodes. Is that limitation acceptable to you? The alternative would be to use some sort of handle that is a persistent identifier of that object, regardless of its location.

2. ReturnFinfo

I'm not exactly sure what sort of thing ReturnFinfo is supposed to be used for. (I couldn't find an instance of its use in the examples.) My interpretation of its function is that it immediately returns a value from the receiving element to the sending element. This will cause trouble if the sending element is on a different node than the receiving element, because the sender will have to be delayed while MPI messages are sent and received. This means that the sender will need its own thread, unless all computation is to be blocked while waiting for MPI (a very bad thing). If the sender is in its own thread, then much locking will be needed to ensure that other threads on that node do not stomp on each other.

3. mpp preprocessor

If someone has an X.mh file and runs it through the mpp preprocessor, then they get X.h, XWrapper.h, and XWrapper.cpp files out. Under what circumstances does one need to hand-modify these files? I would argue that this should never happen.
Here's a thought: why not have a single X.moose file that contains all information about the MOOSE classes, and let mpp translate this into read-only X.h, X.cpp, XWrapper.h, and XWrapper.cpp files, possibly locating these in a C++/ subdirectory so they will not clutter up the developer's working directory. The MOOSE developer should not have to directly deal with these .h and .cpp files anyway. By judicious use of #line directives in the .h and .cpp files, any errors in the C++ compilations can be referred back to the original X.moose file.

4. semantics of "messages" needs to be clearer

I have read various documents under DOCS/ that describe messages, but I'm still fuzzy on exactly how they work, and why all the different types of messages are needed. It would be good to have some concrete examples to refer to, in order to make this clearer. For example, I'm not clear how the clocks for the source and destination elements affect transmission of information across messages (if at all). Also, I'd still like to push to change the name of a "message" to something else (connection? link?), since "message" in CS terminology has a connotation of a one-shot delivery of information. Similarly, to conform to general usage, "object" should replace "element", and "class" should replace "object".

5. Why are synapses treated separately in the Moose header files?

Since synapses are just messages, why should they be given a separate section within a Moose class definition?

6. How would variable-timestep methods be handled?

The clock-based scheduling appears very similar to that found in GENESIS 2. How do you envision an element that is capable of variable-timestep updating being handled?

I don't know if you prefer to discuss these issues in e-mail, or want to try to do a teleconference (possibly with VNC???) with those of us interested in this level of detail. Let me know what works best for you. Thanks. --Greg |
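The handle alternative Greg raises in point 1 can be sketched roughly as follows. This is an editorial illustration, not MOOSE code: the names ElementId and ElementRegistry are hypothetical, and the point is only that a (node, index) handle survives element migration where a raw Element* cannot.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical sketch (not MOOSE API): an ElementId names an element
// by (owning node, persistent index) instead of by raw address, so
// the element can later be moved to another node without invalidating
// every reference to it.
struct Element {
    int value;
};

struct ElementId {
    std::uint32_t node;   // node that currently owns the element
    std::uint64_t index;  // persistent index, stable across moves
};

// Per-node registry mapping persistent indices to local pointers.
// Migrating an element only updates the registries on the two nodes
// involved; ElementId values held elsewhere remain valid.
class ElementRegistry {
public:
    void insert(std::uint64_t index, Element* e) { table_[index] = e; }
    void erase(std::uint64_t index) { table_.erase(index); }
    Element* resolve(const ElementId& id) const {
        std::map<std::uint64_t, Element*>::const_iterator it =
            table_.find(id.index);
        return it == table_.end() ? 0 : it->second;
    }
private:
    std::map<std::uint64_t, Element*> table_;
};
```

The cost of the extra indirection is one map lookup per dereference, which is the usual trade-off for location independence.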
From: Upinder S. B. <bh...@nc...> - 2006-08-03 03:23:12
|
Hi, Greg, Thanks for the feedback. I hope you have had a chance to look at revision 5 or later from the SVN repository. Here are my responses.

Greg Hood said:
> 1. pointers versus handles
> The typical way that an element is referenced within MOOSE is by a pointer (Element *). What this effectively means is that objects are frozen to a particular memory location on a particular node, and it will be next to impossible to later move them to different nodes. Is that limitation acceptable to you? The alternative would be to use some sort of handle that is a persistent identifier of that object, regardless of its location.

As you say, pointer reference is not a good idea, and is meant to be encapsulated in messages. Note that the object hierarchy (/root/foo/bar) is maintained through 'child' messages. These messages are the correct handle to use. I currently permit pointer reference for atomic operations like tree searches. The idea is that these operations should be local and should be instantaneous, ie, no other operation should mess with the tree structure while they occur. Messages as handles are quite general and can stretch across nodes, much like the older GENESIS messages. As it turns out, the solver design which I'm working on is also purely message based. In principle one could have solved objects on multiple nodes. Why anyone would want to do this is another matter!

> 2. ReturnFinfo
> I'm not exactly sure what sort of thing ReturnFinfo is supposed to be used for. (I couldn't find an instance of its use in the examples.) My interpretation of its function is that it immediately returns a value from the receiving element to the sending element. This will cause trouble if the sending element is on a different node than the receiving element, because the sender will have to be delayed while MPI messages are sent and received.
> This means that the sender will need its own thread, unless all computation is to be blocked while waiting for MPI (a very bad thing). If the sender is in its own thread, then much locking will be needed to ensure that other threads on that node do not stomp on each other.

Yes, ReturnFinfo is meant to return a value immediately. Example is table lookup in channel calculations. Multiple different channel instances will want to look up the same alpha/beta table, and will each have their own values to find. Yes, it is bad to run this across nodes. The system should object if this is attempted. As you can see from the example above, the purpose of this message is to avoid the use of pointers but still permit shared resources. If you want to do this across nodes you can duplicate the target. I haven't yet frozen the semantics, but as presently used, ReturnFinfo is readonly.

> 3. mpp preprocessor
> If someone has an X.mh file and runs it through the mpp preprocessor, then they get X.h, XWrapper.h, and XWrapper.cpp files out. Under what circumstances does one need to hand-modify these files? I would argue that this should never happen. Here's a thought: why not have a single X.moose file that contains all information about the MOOSE classes, and let mpp translate this into read-only X.h, X.cpp, XWrapper.h, and XWrapper.cpp files, possibly locating these in a C++/ subdirectory so they will not clutter up the developer's working directory. The MOOSE developer should not have to directly deal with these .h and .cpp files anyway. By judicious use of #line directives in the .h and .cpp files, any errors in the C++ compilations can be referred back to the original X.moose file.

This is almost exactly what I would like and am working toward. At this point the preprocessor is incomplete, partly because the base code still has some way to go.
But you'll find that revision 5 is _much_ improved and can do about 95% of the job of generating X.h, XWrapper.h and XWrapper.cpp files without user intervention. More like 100% in the case of straightforward classes, which are what most users would need. One difference is that the X.cpp file is intentionally not supported. The idea is that this is stuff that the user will want to code independently of the wrapper stuff. For example, if there were serious calculations needed inside an object, we don't want to squash it all into the X.mh file (think of the X.mh file as a super 'header' file). These would then go into X.cpp. The X.cpp functions need not know about MOOSE at all. BTW, I am not keen to use #lines. Phenomenally ugly!

> 4. semantics of "messages" needs to be clearer
> I have read various documents under DOCS/ that describe messages, but I'm still fuzzy on exactly how they work, and why all the different types of messages are needed. It would be good to have some concrete examples to refer to, in order to make this clearer. For example, I'm not clear how the clocks for the source and destination elements affect transmission of information across messages (if at all). Also, I'd still like to push to change the name of a "message" to something else (connection? link?), since "message" in CS terminology has a connotation of a one-shot delivery of information. Similarly, to conform to general usage, "object" should replace "element", and "class" should replace "object".

Two points here: message functionality and the name 'message'.

Message functionality: Think of them as remote function calls with persistent traversal information. Remote in this context means to another object, wherever it lives. I have a list somewhere in the documentation of all the message categories, 7 at last count. Yes, this is a big number and could probably be tightened.
I want to first get all the base code stuff done (solvers and parallelization in particular still remain) before assessing how the different varieties of message behave and seeing if they can be condensed without loss of efficiency.

The name 'message': this is a hangover from GENESIS. I actually use the term connection for the underlying traversal framework, and 'message' for the overall construct. I don't see how we can easily rename it without breaking backward compatibility assumptions, but I am quite open to suggestions. As I recall we have had this discussion before, inconclusively.

> 5. Why are synapses treated separately in the Moose header files?
> Since synapses are just messages, why should they be given a separate section within a Moose class definition?

This is just because they are a special kind of message. Synapses associate extra information with each message (weight, delay, etc) which is allocated on a per-message basis and is not part of the argument list.

> 6. How would variable-timestep methods be handled?
> The clock-based scheduling appears very similar to that found in GENESIS 2.

Solvers handle variable-timestep stuff. When a solver is set up, it inserts itself into the scheduling object hierarchy, and replaces the usual clocktick object that calls the equivalent of the PROCESS action. Then it is up to the solver to decide how to interface with the scheduling calls. Individual variable-timestep solvers are no problem in this context. They get the universal clock ticks from the scheduler and make sure that they are within the appropriate time window. I've already done something like this in GENESIS 2 for the Gillespie solver. Multiple variable-timestep solvers may need to negotiate if they depend on mutual data.

> How do you envision an element that is capable of variable-timestep updating being handled?

See above.
> I don't know if you prefer to discuss these issues in e-mail, or want to try to do a teleconference (possibly with VNC???) with those of us interested in this level of detail. Let me know what works best for you.

I think these are very good questions to bring up, and they should be on record at the MOOSE site to help developers as they dig in. Thanks, Upi |
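Upi's table-lookup example for ReturnFinfo can be sketched roughly as follows. RateTable and its interface are illustrative stand-ins, not the actual MOOSE channel code: the point is a read-only shared resource whose lookup returns immediately, which is cheap when caller and table are on the same node, and which one would duplicate per node rather than query remotely.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hedged sketch of the shared alpha/beta table pattern (names are
// illustrative, not MOOSE's). Many channel instances share one table
// and each looks up its own voltage; the call returns a value
// immediately, which is only reasonable when the table is local.
class RateTable {
public:
    RateTable(double vmin, double vmax, const std::vector<double>& values)
        : vmin_(vmin),
          dv_((vmax - vmin) / (values.size() - 1)),
          values_(values) {}

    // Immediate, read-only lookup with nearest-entry truncation.
    // Read-only access is what makes sharing between many channels
    // on the SAME node safe without per-call locking.
    double lookup(double v) const {
        std::size_t i = static_cast<std::size_t>((v - vmin_) / dv_);
        if (i >= values_.size()) i = values_.size() - 1;
        return values_[i];
    }

private:
    double vmin_, dv_;
    std::vector<double> values_;
};
```

A cross-node ReturnFinfo call, by contrast, would have to block the caller on an MPI round trip, which is exactly the situation both posts agree should be rejected or avoided by duplicating the target.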
From: Josef S. <js...@ya...> - 2006-08-03 14:44:50
|
--- "Upinder S. Bhalla" <bh...@nc...> wrote: > Greg Hood said: > > 3. mpp preprocessor > > If someone has an X.mh file and runs it through the mpp preprocessor, > then > > they get X.h, XWrapper.h, and XWrapper.cpp files out. Under what > circumstances > > does one need to hand modify these files? I would argue that this > should > > never be happen. Here's a thought: why not have a single X.moose file > that > > contains all information about the MOOSE classes, and let mpp translate > this > > into read-only X.h, X.cpp, XWrapper.h, and XWrapper.cpp files, possibly > locating these in a C++/ subdirectory so they will not clutter up the > developer's working directory. The MOOSE developer should not have to > directly deal with these .h and .cpp files anyway. By judicious use of > #line directives in the .h and .cpp files, any errors in the C++ > compilations can be referred back to the original X.moose file. > > This is almost exactly what I would like and am working toward. At this > point the preprocessor is incomplete, partly because the base code still > has some way to go. But you'll find that revision 5 is _much_ improved and > can do about 95% of the job of generating X.h, XWrapper.h and XWrapper.cpp > files without user intervention. More like 100% in the case of > straightforward classes, which are what most users would need. > One difference is that the X.cpp file is intentionally not supported. The > idea is that this is stuff that the user will want to code independently > of the wrapper stuff. For example, if there were serious calculations > needed inside an object, we don't want to squash it all into the X.mh file > (think of the X.mh file as a super 'header' file). These would then go > into X.cpp. The X.cpp functions need not know about MOOSE at all. > BTW, I am not keen to use #lines. Phenomenally ugly! 
I agree that these are ugly, but you can take some license with generated code since the hope is that ultimately no one should ever have to look at it. js...@ya... Software Engineer Linux/OSX C/C++/Java |
From: Greg H. <gh...@ps...> - 2006-08-28 23:17:00
|
Upi, I have been looking at the MOOSE code, and thinking about certain issues involved in parallelizing it, and have some serious concerns.

The greatest concern I have is with the many places in the basecode that make an implicit assumption that elements are locally resident in the nodes' memory, and that only one thread will be actively modifying them. For example, if the elements are distributed over many nodes, then Element::relativeFind() will potentially require information on 2 or more nodes. This will cause the code to block for indefinite periods of time while the interprocess communication is performed and the remote nodes do what they need to do. The simplest way of dealing with this would be to only allow one active thread over the entire set of nodes on which MOOSE is running. However, this would be disastrous in terms of performance -- network setup would be much slower than doing it on a single node. If we allow multiple active threads on each node to avoid the performance hit, then every method that directly or indirectly calls one of these methods that require off-node information will potentially block. While this occurs, incoming requests from other nodes must be handled, and some of those may involve the Element in question. Some form of locking will thus be needed (probably on a per-Element basis). The difficult thing is that each of the places in the code where a potentially blocking call will occur will have to release the Element lock, and must leave the Element (as well as any kernel data structures) in a safe and consistent state. I can't see this being done without rewriting many sections of code. The most troublesome situations will be when modifications are being made to the element tree, such as when new elements are being created or old ones destroyed. Once the network is set up, things may not be so bad, but the network needs to get set up in order to run it.

One solution may be to standardize at the .mh level.
The existing MOOSE code could support running models (i.e., a script + a set of .mh files) on serial machines, and we could have a separately developed parallel version that can run the same models. A few changes would probably still be needed to the existing .mh files, but probably not too many. This approach might make sense if nearly all the visualization and other add-on code would be at the .mh level or higher, but not if those things require major changes to the existing basecode. If you have thought of solutions to any of these problems, I would be interested in hearing about them. --Greg |
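The release-and-revalidate pattern Greg is worried about (drop the per-Element lock before a potentially blocking remote call, leave the element consistent, reacquire afterwards) can be sketched like this. All names here are illustrative, and the blocking MPI exchange is replaced by a stand-in function; this is an editorial sketch of the hazard, not MOOSE code.

```cpp
#include <cassert>
#include <mutex>

// Illustrative sketch only (not MOOSE code): a per-Element mutex must
// be released around any call that can block on another node, and the
// element must be in a consistent, observable state at that moment.
struct Element {
    std::mutex lock;
    int cachedValue;
    Element() : cachedValue(0) {}
};

// Stand-in for a blocking MPI request/reply to a remote node.
int blockingRemoteLookup() { return 7; }

int relativeFindRemote(Element& e) {
    std::unique_lock<std::mutex> guard(e.lock);
    // ... local work under the lock; element left consistent ...
    guard.unlock();   // release before blocking, so incoming requests
                      // from other nodes can still touch this Element
    int remote = blockingRemoteLookup();
    guard.lock();     // reacquire; the element may have changed while
                      // we were blocked, so a real kernel must
                      // revalidate its assumptions here
    e.cachedValue = remote;
    return e.cachedValue;
}
```

The hard part Greg identifies is not the pattern itself but that every call site which can transitively block needs this treatment, and the revalidation step after reacquiring the lock is easy to get wrong.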
From: Michael E. <mie...@gm...> - 2006-08-29 02:15:21
|
Hardware is also moving in an increasingly thread optimized direction, so making moose thread friendly will go a long way to making it run well on future machines. On 8/28/06, Greg Hood <gh...@ps...> wrote: > Upi, > I have been looking at the MOOSE code, and thinking about certain issues > involved in parallelizing it, and have some serious concerns. > > The greatest concern I have is with the many places in the basecode > that make an implicit assumption that elements are locally resident in > the nodes's memory, and that only one thread will be actively > modifying them. For example, if the elements are distributed over > many nodes, then Element::relativeFind() will potentially require > information on 2 or more nodes. This will cause the code to block for > indefinite periods of time while the interprocess communication is > performed and the remote nodes do what they need to do. The simplest > way of dealing with this would be to only allow one active thread over > the entire set of nodes on which MOOSE is running. However, this > would be disastrous in term of performance -- network setup would be > much slower than doing it on a single node. If we allow multiple > active threads on each node to avoid the performance hit, then every > method that directly or indirectly calls one of these methods that > require off-node information will potentially block. While this > occurs, incoming requests from other nodes must be handled, and some > of those may involve the Element in question. Some form of locking will > thus be needed (probably on a per-Element basis). The difficult thing > is that each of the places in the code where a potentially blocking > call will occur will have to release the Element lock, and must leave > the Element (as well as any kernel data structures) in a safe and > consistent state. I can't see this being done without rewriting many > sections of code. 
The most troublesome situations will be when > modifications are being made to the element tree, such as when new > elements are being created or old ones destroyed. Once the network is > set up, things may not be so bad, but the network needs to get set up > in order to run it. > > One solution may be to standardize at the .mh level. The existing > MOOSE code could support running models (i.e., a script + a set of .mh > files) on serial machines, and we could have a separately developed > parallel version that can run the same models. A few changes would > probably still be needed to the existing .mh files, but probably not > too many. This approach might make sense if nearly all the visualization > and other add-on code would be at the .mh level or higher, but not if > those things require major changes to the existing basecode. > > If you have thought of solutions to any of these problems, I would > be interested in hearing about them. > --Greg |
From: Josef S. <js...@ya...> - 2006-08-29 09:50:32
|
--- Michael Edwards <mie...@gm...> wrote: > Hardware is also moving in an increasingly thread optimized direction, > so making moose thread friendly will go a long way to making it run > well on future machines. > Agreed. There is definitely some work to do in this regard. Keep in mind we didn't gain much by using the STL. The most you can even hope for in any STL implementation is that: 1) multiple readers are safe and 2) multiple writers to _different_ containers are safe. That ain't much and even these aren't guaranteed. I think the boost (boost.org) libraries would be particularly helpful here, especially the portable threads and reference-counted pointers (sorry. I'm a geek. This is a development mailing list after all). joe js...@ya... Software Engineer Linux/OSX C/C++/Java |
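The reference-counted pointers Josef mentions (boost::shared_ptr at the time; the same facility is now std::shared_ptr) address one specific hazard in a multithreaded kernel: an Element being destroyed while a reader still holds a reference. A minimal sketch of that property, with illustrative names:

```cpp
#include <cassert>
#include <memory>

// Sketch of the reference-counting idea: ownership is shared, and the
// Element is freed only when the last holder releases it. A reader
// that copied the shared_ptr can therefore never be left with a
// dangling pointer, even if the "owner" deletes its reference.
struct Element {
    int value;
    Element() : value(0) {}
};

std::shared_ptr<Element> makeElement(int v) {
    std::shared_ptr<Element> e = std::make_shared<Element>();
    e->value = v;
    return e;
}
```

Note that shared_ptr only solves lifetime, not mutation: concurrent writers to the pointed-to Element still need the locking discussed elsewhere in this thread.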
From: Josef S. <js...@ya...> - 2006-08-29 09:34:25
|
Hi Greg, --- Greg Hood <gh...@ps...> wrote: > Upi, > I have been looking at the MOOSE code, and thinking about certain issues > involved in parallelizing it, and have some serious concerns. > > The greatest concern I have is with the many places in the basecode > that make an implicit assumption that elements are locally resident in > the nodes's memory, and that only one thread will be actively > modifying them. For example, if the elements are distributed over > many nodes, then Element::relativeFind() will potentially require > information on 2 or more nodes. This will cause the code to block for > indefinite periods of time while the interprocess communication is > performed and the remote nodes do what they need to do. The simplest > way of dealing with this would be to only allow one active thread over > the entire set of nodes on which MOOSE is running. However, this > would be disastrous in term of performance -- network setup would be > much slower than doing it on a single node. If we allow multiple > active threads on each node to avoid the performance hit, then every > method that directly or indirectly calls one of these methods that > require off-node information will potentially block. While this > occurs, incoming requests from other nodes must be handled, and some > of those may involve the Element in question. Some form of locking will > thus be needed (probably on a per-Element basis). The difficult thing > is that each of the places in the code where a potentially blocking > call will occur will have to release the Element lock, and must leave > the Element (as well as any kernel data structures) in a safe and > consistent state. I can't see this being done without rewriting many > sections of code. The most troublesome situations will be when > modifications are being made to the element tree, such as when new > elements are being created or old ones destroyed. 
Once the network is > set up, things may not be so bad, but the network needs to get set up > in order to run it. Moose has a solid foundation using decent design patterns. Since these patterns were introduced, several hurricanes struck. My guess is that among them were lack of a cohesive and comprehensive architectural design, personnel changes, premature optimization, and just wanting to get some pieces done. I was going to warn you about these issues. I think it's completely unreasonable to try to parallelize the code in its current incarnation. In a nutshell, it's just entirely too tightly coupled, with very little apparent cohesion in any of the classes: Elements contain connections, connections contain Elements, Fields contain connections and Elements, etc, etc, etc. This has led to the dreaded "header.h", AKA KitchenSink.h. I've been trying to get the code back in line with the (maybe just apparent) original design patterns. It's difficult. What's most intimidating is that all roads lead to rewriting the moose preprocessor and subsequently regenerating a bunch of code from the .mh files. The problem is that many of the generated files have since been edited by hand. Ugh. The core moose needs to be a library -- something to be used by programmers. Creating libraries requires a greater attention to implementation details in order to provide accepted and expected behaviors. We need to be able to present a clean API to this library, ideally through swig. The genesis parser, ReadCell, Plot, etc should use the API to access the core library. However, core architectural issues must be addressed before this will be attainable. > [...] > --Greg joe js...@ya... Software Engineer Linux/OSX C/C++/Java |
From: Upinder S. B. <bh...@nc...> - 2006-08-29 05:26:40
|
Dear Greg et al, Thanks for raising some very important and interesting points. I have not yet thought much about parallel model loading, because I don't have much idea about how much of a bottleneck it might be. Before I dive into the details, this is my earlier line of thought; please comment on it.

1. Threads: I had considered restricting multithreading to solvers, on a per-node basis, for some of the reasons Greg has outlined.

2. RelativeFind etc: I had considered caching info on the postmaster to speed up the process of finding remote objects, and grouping requests for remote-node element info.

3. Parallel model building: I thought that almost all cases where this would be critical would be through special calls like createmap, region-connect, and perhaps copy. Most of these can be rather cleanly done in parallel with minimal internode communication. However, a global checkpoint would be needed to ensure synchrony between these calls.

I should also add that the divide between setup time and runtime is probably not so clean and we will definitely need to figure out efficient ways of handling this. For example, in signalling simulations I have already had issues where new organelles are budding off and being destroyed at runtime.

To consider Greg's points:
> The greatest concern I have is with the many places in the basecode that make an implicit assumption that elements are locally resident in the nodes' memory, and that only one thread will be actively modifying them. (...) Some form of locking will thus be needed (probably on a per-Element basis).

Couldn't we put a lock at an appropriate place in an element tree, but permit other element trees to be accessed safely?

> The most troublesome situations will be when modifications are being made to the element tree, such as when new elements are being created or old ones destroyed.

Can we have a lock set whenever 'dangerous' commands are being executed? Most commands at runtime are relatively safe.
> One solution may be to standardize at the .mh level.... This approach might make sense if nearly all the visualization and other add-on code would be at the .mh level or higher, but not if those things require major changes to the existing basecode.

I'm not sure what you have in mind here. To me it looks like all the locking stuff should be done at the basecode level, so the user does not need to know about it even if they are developing new objects using .mh files. Could you expand on it? -- Upi |
From: Greg H. <gh...@ps...> - 2006-08-29 23:24:07
|
Upinder S. Bhalla writes: > Dear Greg et al, > Thanks for raising some very important and interesting points. I have > not yet thought much about parallel model loading, because I don't have > much idea about how much of a bottleneck it might be. Efficient parallel model loading (or setup) is definitely important; this sort of thing can quickly become the bottleneck in running a large simulation. Setup time is, of course, model-dependent, but one data point I can cite is the large PGENESIS cerebellar model that was run on our T3E several years ago by Fred Howell et al.: -- for their largest model (on 128 nodes) the setup time (done in parallel) was taking 65% as long as the time for doing the actual simulation. > Before I dive into > the details, this is my earlier line of thought; please comment on it. > > 1. Threads: I had considered restricting multithreading to solvers, on a > per-node basis, for some of the reasons Greg has outlined. While solvers should definitely be able to take advantage of multithreading, I'm uncomfortable restricting everything else to just one thread. For instance, the GUI will likely need a variable number of threads to make programming easier. I also like doing network I/O and large file I/O as separate threads so that they do not freeze up the system if delays occur. > 2. RelativeFind etc: I had considered caching info on the postmaster to > speed up the process of finding remote objects, and grouping requests for > remote-node element info. Yes, those things help. If the cache is required to give 100% accurate information (as opposed to hints with no guarantee of correctness), then cache consistency issues have to be dealt with, since elements can come into and out of existence. If elements are allowed to be movable between nodes, this gets messier. > 3. Parallel model building: I thought that almost all cases where this > would be critical would be through special calls like createmap, > region-connect, and perhaps copy. 
Most of these can be rather cleanly done > in parallel with minimal internode communication. However, a global > checkpoint would be needed to ensure synchrony between these calls. "region-connect" will almost certainly require a lot of internode communication. And while we can anticipate the most common patterns of connectivity (as GENESIS 2 did with planarconnect and volumeconnect), there will be a significant number of people who want to specify connections some other way, and they have to resort to connecting up many elements individually. We need to let them do this in a way that happens in parallel. > I should also add that the divide between setup time and runtime is > probably not so clean and we will definitely need to figure out efficient > ways of handling this. For example, in signalling simulations I have > already had issues where new organelles are budding off and being > destroyed at runtime. I think this is a very important point, and I completely agree. It would be possible to get better simulation performance if we assumed a sharp division between the model construction and simulation phases, and compiled the model down to a super-efficient simulatable form, but then this makes it more difficult to dynamically view or alter the models at runtime. And, as you mentioned, real cellular processes occur that are best modeled as structural changes to the model rather than numerical changes to already existing parameters. > To consider Greg's points: > > The greatest concern I have is with the many places in the basecode that > make an implicit assumption that elements are locally resident in the > nodes's memory, and that only one thread will be actively > > modifying them. (...) > > Some form of locking will thus be needed (probably on a per-Element basis). > Couldn't we put a lock at an appropriate place in an element tree, but > permit other element trees to be accessed safely ? 
As long as the element tree is located entirely on a single node, this should work. I don't think locking subtrees that are distributed over multiple nodes would be desirable because of performance and deadlock issues. > >The most troublesome situations will be when > > modifications are being made to the element tree, such as when new > >elements are being created or old ones destroyed. > Can we have a lock set whenever 'dangerous' commands are being executed? > Most commands at runtime are relatively safe. > > > One solution may be to standardize at the .mh level.... This approach > > might make sense if nearly all the > > visualization and other add-on code would be at the .mh level or higher, > > but not if those things require major changes to the existing basecode. > > I'm not sure what you have in mind here. To me it looks like all the > locking stuff should be done at the basecode level, so the user does not > need to know about it even if they are developing new objects using .mh > files. Could you expand on it? I wasn't suggesting that locking should be done in the .mh files. What I was suggesting is that the syntax/semantics of the .mh level should be cleaned up and more or less frozen. This would allow development to proceed at the .mh level and higher (GUIs, new modeling primitives, solvers(?), etc.) using the current MOOSE kernel (with some modifications). We can then plug in a parallel-capable kernel at a later time and get everything to run on parallel systems. People writing at the .mh level and higher should be writing code that is independent of the hardware characteristics, such as the number of nodes available, or the relative performance of the nodes. An example of a change that would be necessary at the .mh level would be to use something like "ElementID" or "ElementHandle" instead of "Element *", because Element* makes an implicit assumption that the Element is located on the same node as where the code is executing.
The current MOOSE could just define ElementID to Element*, but a parallel implementation could define it to something else (e.g., pair<NodeID,uint64_t>). --Greg |
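Greg's closing suggestion can be made concrete with a build-time switch. This is a sketch of the idea only; MOOSE_PARALLEL and NodeID are illustrative names, not actual MOOSE definitions:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Sketch of Greg's suggestion: code at the .mh level and higher is
// written against ElementID, never against Element*, so the same
// source compiles for both a serial and a future parallel kernel.
struct Element {
    int value;
};

#ifdef MOOSE_PARALLEL
typedef std::uint32_t NodeID;
// Parallel build: a handle is a (node, persistent index) pair, as
// suggested in the message above.
typedef std::pair<NodeID, std::uint64_t> ElementID;
#else
// Serial build: the handle degenerates to a plain pointer, so the
// abstraction costs nothing on a single node.
typedef Element* ElementID;
#endif
```

In the serial build the typedef behaves exactly like the pointer it replaces, which is what makes the migration path incremental.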