From: Bryan T. <br...@sy...> - 2014-11-07 20:05:04
There is an ExportKB utility. It is described on the following wiki page:

- DataMigration <http://wiki.bigdata.com/wiki/index.php/DataMigration> (a page dedicated to data migration). See http://wiki.bigdata.com/wiki/index.php/DataMigration#Export

A minimal dump-to-TriG sketch using the plain Sesame API is also included below the quoted message.

The HAJournal supports online backup. This provides a compressed copy of the journal that is consistent with the state of the journal as of the last commit point when the backup was requested. It can be used in the HA1 mode (without replication) to combine online backups with incremental transaction logs. Each incremental transaction log is the write set for a single commit point together with the opening and closing root blocks. The HARestore utility may be used to decompress a given journal file and apply the incremental logs in order to roll forward to a given commit point.

Thanks,
Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://bigdata.com
http://mapgraph.io

CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments.

On Fri, Nov 7, 2014 at 12:44 PM, Jeremy J Carroll <jj...@sy...> wrote:

> If we want a backup copy of the store, I can use various SPARQL CONSTRUCT
> calls to extract each named graph, but is there a simpler dump-to-trig
> option somehow?
>
> Jeremy
>
> ------------------------------------------------------------------------------
> _______________________________________________
> Bigdata-developers mailing list
> Big...@li...
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
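For the dump-to-TriG question specifically, one alternative to ExportKB is to go through the standard Sesame/OpenRDF API that the Bigdata Sail implements and stream every named graph through a TriG writer. The sketch below is only an illustration under assumptions: the class name DumpToTrig, the property-file path "RWStore.properties", and the output file "dump.trig" are mine, and the BigdataSail / BigdataSailRepository names should be checked against the release you are running.

import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.Properties;

import org.openrdf.repository.RepositoryConnection;
import org.openrdf.rio.RDFFormat;
import org.openrdf.rio.RDFWriter;
import org.openrdf.rio.Rio;

import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class DumpToTrig {

    public static void main(final String[] args) throws Exception {

        // Journal/KB configuration; the property-file path is hypothetical.
        final Properties props = new Properties();
        final FileInputStream in = new FileInputStream("RWStore.properties");
        try {
            props.load(in);
        } finally {
            in.close();
        }

        // Open the Bigdata Sail behind the generic Sesame repository API.
        final BigdataSail sail = new BigdataSail(props);
        final BigdataSailRepository repo = new BigdataSailRepository(sail);
        repo.initialize();
        try {
            final RepositoryConnection cxn = repo.getConnection();
            try {
                // Stream every statement, including named graphs, to TriG.
                final OutputStream out =
                        new BufferedOutputStream(new FileOutputStream("dump.trig"));
                try {
                    final RDFWriter writer = Rio.createWriter(RDFFormat.TRIG, out);
                    cxn.export(writer); // no context arguments => all graphs
                } finally {
                    out.close();
                }
            } finally {
                cxn.close();
            }
        } finally {
            repo.shutDown();
        }
    }
}

This is just the generic Sesame export path; the ExportKB utility on the DataMigration wiki page remains the documented route and also handles multiple namespaces and output formats.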