James M. Moe
I am in the initial stage of implementing XML to store profile and object data instead of a rather limited proprietary format. The profile data is very minimal; typically the files are about 350 bytes. The object data, though, can vary from 100 bytes to several MB; it contains a list of objects that can range from zero to thousands.
The objects are created thus:
At the moment there is not much optimization regarding I/O. Each time an object is added or removed, the entire XML document is recreated and saved. It would be better to add/remove a single node instead of recreating the whole node list every time. Nevertheless…
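For illustration, here is a minimal sketch of that rewrite-everything pattern, using Python's `xml.etree` rather than the library the original code uses (the function and element names are hypothetical):

```python
import xml.etree.ElementTree as ET

def add_object(path, obj_id):
    """Re-parse the whole document, append one node, and rewrite the file.

    This mirrors the pattern described above: every add/remove pays the
    full cost of re-serializing the entire document.
    """
    tree = ET.parse(path)
    root = tree.getroot()
    ET.SubElement(root, "object", id=str(obj_id))
    tree.write(path, encoding="utf-8", xml_declaration=True)
```

With this pattern, the cost of each mutation grows with total document size, but it should grow roughly linearly, not by the factor described below.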
The problem I am seeing is that as an XML doc is repeatedly saved, the save time progressively increases far faster than the number of objects or the file size does. The CPU usage increases dramatically, which explains the increased save time. After roughly 50 save repetitions the slowdown becomes obvious, and the save time eventually increases by up to a factor of 40. The situation persists until the program is restarted. Then the save time for the very same XML doc, with the same number of objects, drops back to "normal."
Can anyone suggest what may be happening here?