As complex as this is, I do not have much to say!  The only comment that I have is:

I would like to see the changes, edits, modifications, etc to the database to be in a git repository...

But then we must think of the normal user too: will they be able to use our software if we add an increased level of technology to it?

Will they be willing to use it if they do NOT understand what a git repo is...

I was also thinking that using gramps-connect in the first place requires a certain level of technology anyway!

Sincerely yours,
Rob G. Healey

On Mon, Jun 18, 2012 at 7:26 AM, Doug Blank <> wrote:

First, gramps-connect is coming along (and is kept up-to-date online):

* can browse all data
* can edit all of the core data on all of the main objects
* can delete all of the main objects (currently just deletes, no
warning, no undo)
* can edit main parts of names, surnames
* can run any report, import, and export from the web (no tools though)
* three levels of permissions (a rough sketch of the checks follows after this list):
** not logged in: only see non-private, non-living data
** logged in, but not superuser: can see all data, export, and run reports
** logged in, superuser: can edit, delete, import data
* can edit notes with markup
* can add children to families, events to people, etc
* can change CSS of site
* can change site name
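
As a rough sketch of those three permission levels (this is not the actual gramps-connect code; the user object and its is_logged_in/is_superuser flags are just placeholders for illustration), the checks might look something like this:

def can_view_private(user):
    # Logged-in users (superuser or not) may see private and living data.
    return user.is_logged_in

def can_export_and_report(user):
    # Any logged-in user may export data and run reports.
    return user.is_logged_in

def can_modify(user):
    # Only superusers may edit, delete, or import data.
    return user.is_logged_in and user.is_superuser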

This is of course still very much alpha, but I've put my family tree
online and have started doing simple edits. It tastes like dogfood,
but either I'm getting used to it, or it gets a little better every day.

One of the first things that one wants to do is merge the changes made
online with a master database. We have all made some initial notes.

Now, I'm seriously thinking about how to do this, perhaps starting
with something simple. I'm thinking that there are three different approaches:

1) diff and patch: keep track of all edits, deletes, and additions and
create a type of patch file that gets applied to another database.

2) subversion: there is a master database, and all patches are
incorporated there. A special re-sync could get sent out to
checked-out versions.

3) git: all databases are full repositories, and can be forked and merged.

Perhaps starting with #1 is the easiest, and could lead to the others.
But even with that option, there appears to be additional data that
needs to be kept. For example, say I delete an object in a database;
how do I keep track of that, to be able to send it to the other database?

At a bare minimum, it seems like we need a persistent representation of:

date-time, object-type, handle, change-made, commit-message
2012/6/1 11:00:00, person, 34763478324, deleted, Duplicate person
2012/6/1 12:00:00, person, 23984737847, created, Research on Monday
2012/6/1 12:00:00, source, 38734763786, created, Research on Monday
2012/6/1 12:00:00, citation, 34834767346, created, Research on Monday
2012/6/1 12:01:00, person, 23984737847, edited, Typo on given name

(The commit message is not strictly needed, but I am finding it to be
quite useful.) From this it seems that a patch-like file (written in
xml?) could be made, given a start date-time. Applying the patch may
run into conflicts, but that is a separate issue, I think. We could
also include more information here in the persistent storage (for
example, before and after serializations).
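
To make this concrete, here is a rough sketch (not Gramps code) of how such change records could be appended to a persistent log. The field names follow the columns above; the CSV format, file name, and function name are just assumptions for illustration:

import csv
from datetime import datetime
from collections import namedtuple

# One record per change, matching the columns listed above.
ChangeRecord = namedtuple(
    "ChangeRecord",
    ["date_time", "object_type", "handle", "change_made", "commit_message"])

def log_change(logfile, object_type, handle, change_made, commit_message=""):
    # Append a single change record to an append-only text file.
    record = ChangeRecord(datetime.now().isoformat(sep=" "),
                          object_type, handle, change_made, commit_message)
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow(record)

# e.g. log_change("changes.csv", "person", "34763478324", "deleted",
#                 "Duplicate person")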

So, if this sounds correct, where/how should the data be stored?
Perhaps just a text file that we can append onto would be safe and
sturdy? Should we reuse the XML representation of the data for the
patch? That sounds best, as we already can read/write those. But a
JSON file would be easy, too. (We could just use raw Python
serialization, but that could get messy when dealing with different
database versions.)
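
If the JSON route were taken, a sketch of turning the log into a patch file might look like the following (same assumptions as the log sketch above; comparing date-time strings only works because the log writes ISO-style timestamps, which sort lexicographically):

import csv
import json

FIELDS = ["date_time", "object_type", "handle", "change_made", "commit_message"]

def make_patch(logfile, start, patchfile):
    # Collect every change record newer than `start` and write it out
    # as a JSON "patch" that could be applied to another database.
    with open(logfile, newline="") as f:
        records = [dict(zip(FIELDS, row)) for row in csv.reader(f)]
    patch = [r for r in records if r["date_time"] > start]
    with open(patchfile, "w") as f:
        json.dump(patch, f, indent=2)

# e.g. make_patch("changes.csv", "2012-06-01 11:30:00", "patch.json")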

Comments, ideas welcomed,

