in the basemodule as well, so I guess this still needs a bit more thought.. or not..
In fact, I don't clearly see why the manage_brok method of the modules always needs to receive a brok where the main object concerned has been transformed into a dict in the brok.data field/attribute..
For livestatus at least, but possibly for all modules..
I'm afraid I'm a bit lost about what is "serialized" (for transport between possibly remote processes) / when / how / why..
In fact the broks are just dicts generated in the schedulers from some properties of the objects. Downtime is such a property, so the whole Downtime object (a pointer to it) gets added in there too.
Then the broker gets them via a Pyro call. It's not the same process, so Pyro requires a pickle pass. All objects go through their __getstate__ functions to produce "something" (a dict, in fact). Then the serialized object (the dict) is sent over the network, and when the broker receives it, it calls __setstate__ with it (to un-dict the values). So in our brok we end up with a pickle pass of the original dict. Our Downtime object is the result of __getstate__ on the original one (we just remove the .ref here, in fact). So the new object is something like a deepcopy of the original one from the scheduler process.
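To make the pickle pass concrete, here is a minimal sketch (the Downtime and Host classes below are simplified stand-ins, not the real Shinken classes): __getstate__ drops the .ref pointer before serialization, and what comes out of pickle.loads on the broker side is effectively a deep copy with .ref cleared.

```python
import pickle

class Host(object):
    """Stand-in for a monitored object (hypothetical)."""
    def __init__(self, name):
        self.name = name

class Downtime(object):
    """Simplified sketch of a Downtime-like object."""
    def __init__(self, ref, duration):
        self.ref = ref          # pointer back to the host/service
        self.duration = duration

    def __getstate__(self):
        # Drop the .ref pointer before serialization
        state = self.__dict__.copy()
        del state['ref']
        return state

    def __setstate__(self, state):
        # Rebuild the object from the dict; .ref is gone
        self.__dict__.update(state)
        self.ref = None

host = Host('srv1')
dt = Downtime(host, 3600)

# A brok is just a dict of properties; the whole Downtime object rides along
brok_data = {'host_name': host.name, 'downtime': dt}

# The Pyro transport implies a pickle pass: __getstate__, then __setstate__
wire = pickle.dumps(brok_data)
received = pickle.loads(wire)

# The broker side gets what amounts to a deep copy, with .ref cleared
assert received['downtime'] is not dt
assert received['downtime'].ref is None
assert received['downtime'].duration == 3600
```

Mutating `received['downtime']` on the broker side can never touch the scheduler's original object, which is the key property the in-process test path loses.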
Here in the test we use the same process, so there is no pickle pass when we send the brok, and the Downtime the module loads is the original one. So when we change its ref or something like that, we change the original one, and we get problems with the rest of the code. So a solution is to call deepcopy() on the brok before manage_brok, but only in our test_livestatus update_broks function, because it's the only one that has the problem. Real-world apps don't have this, only our test :p
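A rough sketch of that test-only fix (update_broks, FakeModule, and the Brok/Downtime classes here are illustrative, not the real test code): deep-copy each brok before handing it to the module, which mimics the copy a real pickle pass would have produced.

```python
import copy

class Downtime(object):
    def __init__(self, ref):
        self.ref = ref

class Brok(object):
    def __init__(self, data):
        self.data = data

class FakeModule(object):
    """Minimal stand-in for a livestatus-like module (hypothetical)."""
    def manage_brok(self, brok):
        # The module mutates the object it receives
        brok.data['downtime'].ref = None

def update_broks(module, broks):
    # In-process test path: deep-copy each brok before manage_brok, so the
    # module cannot mutate the scheduler's original objects (e.g. Downtime.ref)
    for brok in broks:
        module.manage_brok(copy.deepcopy(brok))

original_ref = object()
dt = Downtime(original_ref)
brok = Brok({'downtime': dt})

update_broks(FakeModule(), [brok])

# The scheduler-side original is untouched
assert dt.ref is original_ref
```

Without the deepcopy(), manage_brok would have set the original dt.ref to None and broken the rest of the scheduler-side code, which is exactly the failure mode described above.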