Finally getting around to working on this... but it's actually much more complicated than I thought.
My idea was to simply use something like:
counter + (std::numeric_limits<unsigned long int>::max() / n_processors()) * processor_id()
as the unique ID, where "counter" is incremented every time a DofObject gets created.
Unfortunately that doesn't work.
Here's the problem: DofObject is NOT a ParallelObject and doesn't know what processor_id it's assigned to when it's created. So we are missing two critical pieces of information needed to create a unique ID at the time a DofObject is constructed:
1. How many total processors there are.
2. What processor this DofObject is initially assigned to.
Not only that - but with ParallelMesh you have to be careful to assign the same ID on every processor...
So - I started looking for the right place to assign a unique ID... and it's looking like a callback to Mesh from the Partitioner (in partition()) might not be a bad idea. Maybe something like virtual void MeshBase::assign_unique_dof_ids().
In the case of SerialMesh this isn't a problem because it can just loop over all DofObjects in the Mesh in the same order on every processor and assign a unique ID using just a "serial" counter. If any new Elements are added after partition() they will get the next number... etc. Easy.
However, for ParallelMesh things are not so easy. All processors can't loop over all the objects in the very same way - so the scheme for SerialMesh is out. The "counter" scheme (outlined at the beginning) seems like a good idea - but there is still one problem: processor_id is assigned to elements and nodes independently... and in a two-stage process (look in Partitioner::partition()).
I can definitely assign the unique_ids for the elements _just_ after set_parent_processor_ids(mesh) in Partitioner::partition(). The cool thing is that redistribute() is called right after that, which will cause that unique_id to get packed up along with the rest of the Elem and sent wherever it needs to go... so everything will work fine.
The bad part is that the Nodes _also_ get packed up and sent during redistribute()... which means that if we set the unique_id on the nodes _after_ redistribute() we'll have to go through a second round of negotiation to make sure those IDs are consistent across all processors. However, it's not until _after_ redistribute() that the processor_id() gets set for nodes! So we can't set the unique_id for nodes before redistribute() either...
Am I way off base here?