Not currently. For this problem we recommend using a Transform Map with a JDBC Data Source. Uploading and downloading are completely different problems. Because our SQL tables are created by the DataPump, the column names are guaranteed to match and the primary key is always sys_id. Uploads to ServiceNow require a field map and a method for identifying the primary key. We are considering an enhancement to the DataPump that would support uploads by leveraging the functionality of Web Services Import Sets.
This field is only used in situations where the SQL table and the ServiceNow table have different names. Normally the names are the same, e.g. cmdb_ci_computer maps to cmdb_ci_computer.
No. The DataPump only generates DDL if a table does not already exist in the target database. If you add a field to a ServiceNow table, then you must manually add a column with the same name to the SQL table. Once the column has been added, the DataPump will begin populating it.
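For example, if a field named u_warranty_expiration were added to cmdb_ci_computer in ServiceNow (the field name here is hypothetical), the matching column would be added by hand before the next run, along the lines of:

```sql
-- Hypothetical example: the column name must exactly match the new
-- ServiceNow field name; pick a type appropriate to your database.
ALTER TABLE cmdb_ci_computer
    ADD u_warranty_expiration DATETIME NULL;
```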
Simply drop the field from the SQL table. As part of its start-up process, the DataPump compares the field names between the source and the target tables. It only replicates those fields where it finds a match.
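The start-up comparison amounts to taking the intersection of the two column lists. A minimal sketch, assuming we already have the field names from both sides (the function and sample names are illustrative, not the DataPump's actual internals):

```python
def replicated_fields(servicenow_fields, sql_columns):
    """Return only the fields present in both the source (ServiceNow)
    table and the target SQL table; everything else is skipped."""
    target = {c.lower() for c in sql_columns}
    return [f for f in servicenow_fields if f.lower() in target]

source = ["sys_id", "name", "serial_number", "os_version"]
sql = ["sys_id", "name", "serial_number"]  # os_version was dropped
print(replicated_fields(source, sql))  # ['sys_id', 'name', 'serial_number']
```

Because only matched names are replicated, dropping the SQL column is all that is needed; no configuration change is required.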
If a signal file is defined in the properties file, then the DataPump will check for the existence of this file after every SOAP call, or once per second while it is sleeping. If the file is detected, the application will raise an internal InterruptedException. This provides a clean mechanism to shut down the application if it is in the middle of a long-running job set. The -cancel command line option simply creates the signal file and terminates, allowing any other running instance of the application to detect that the file has been created. The -resume option can be used to resume a cancelled job set.
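The mechanism can be sketched in a few lines. This is an illustration of the pattern, not the DataPump's Java source; the file name and function names are assumptions:

```python
import os

SIGNAL_FILE = "datapump.cancel"  # in practice, taken from the properties file

class CancelSignal(Exception):
    """Stands in for the internal InterruptedException."""

def check_signal():
    # Called after every SOAP call, or once per second while sleeping.
    if os.path.exists(SIGNAL_FILE):
        raise CancelSignal("signal file detected; shutting down cleanly")

def cancel():
    # What the -cancel option does: create the file and exit,
    # letting any running instance notice it on its next check.
    open(SIGNAL_FILE, "w").close()
```

The running instance never needs to be sent an OS signal; it discovers the file on its own schedule, which is why shutdown is clean even mid-job-set.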
Surprisingly good, given that SOAP is such a verbose protocol. If the initial seeding of historical data takes a long time to complete, then so what? Seeding is a one-time process. The key is to design an efficient ongoing replication process that does not repeatedly download the same data from the ServiceNow instance. ServiceNow lends itself well to this, because every record is stamped with an update time. (Just make sure that sys_updated_on is indexed.) The DataPump "Refresh" job keeps track of the last time it was run, and pulls only the new or changed records. Deletes are rare in our ServiceNow implementation. Under most circumstances we do not allow non-administrators to delete tasks or configuration items. However, in those cases where deletes are permitted, use a "Prune" job to find them. "Prune" finds deleted records by scanning sys_audit_delete. (Again, just make sure that sys_audit_delete is properly indexed.)
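The "Refresh" bookkeeping boils down to remembering one timestamp per table and filtering on sys_updated_on. A minimal sketch, assuming a ServiceNow-style encoded query for the date filter; the class and its persistence are illustrative, not the DataPump's actual implementation:

```python
from datetime import datetime

class RefreshJob:
    """Tracks the last run time so each run pulls only new or
    changed records (the reason sys_updated_on must be indexed)."""

    def __init__(self, table):
        self.table = table
        self.last_run = None  # the real tool persists this between runs

    def encoded_query(self, now):
        # First run has no watermark, so it seeds everything;
        # subsequent runs filter on the previous run's timestamp.
        if self.last_run is None:
            query = ""
        else:
            query = "sys_updated_on>" + self.last_run.strftime("%Y-%m-%d %H:%M:%S")
        self.last_run = now
        return query

job = RefreshJob("cmdb_ci_computer")
job.encoded_query(datetime(2012, 1, 1))         # "" -- initial seed, pulls all rows
print(job.encoded_query(datetime(2012, 1, 2)))  # sys_updated_on>2012-01-01 00:00:00
```

A "Prune" job follows the same watermark pattern, but runs its query against sys_audit_delete instead of the replicated table.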