
Help with reloading of the target node

  • SS_sym

    SS_sym - 2017-05-03

    Hi,
I am new to SymmetricDS and would appreciate any help with the issue I'm facing.
I have a master/client setup: the master node 'corp' pushes data to the client node 'store'. The sync works fine in real time. However, the client node was down for a while, and after it came back up I saw issues with syncing the tables. In my target database I am syncing three tables, each assigned to a different channel. Now I want to reload the 'store' node: delete all the rows in all three tables and sync/reload them again from the corresponding tables in the source database.
In my corp-000.properties, I have these three properties:
    auto.reload=true
    initial.load.delete.first=true
    initial.load.delete.first.sql=delete from %s

    When I run the command :
    bin/symadmin --engine corp-000 reload-node 002

    I see this message: Successfully enabled initial load for node 002
    However, when I start the SymmetricDS instances on both master and client, it does not delete the data in the target tables and reload them. I truncated the target tables manually, and now it syncs new data only (I want it to load all the data, including the old rows I deleted, from the source tables).
    Am I missing any configuration here? After manually truncating the target tables, is there any way I can force a full reload/replication of all the data?
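    In case it helps anyone debugging the same thing: reload-node works by setting a flag in the sym_node_security table, so whether the request actually registered can be checked directly (a sketch, run against the corp database; node id '002' as above):

    ```sql
    -- Check whether the initial-load flag was set for the store node (run at corp).
    -- reload-node sets initial_load_enabled = 1; the routing job then queues
    -- the reload batches into sym_outgoing_batch on the 'reload' channel.
    select node_id, initial_load_enabled, initial_load_time
    from sym_node_security
    where node_id = '002';
    ```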

    Thanks
    Ss

     
  • Steve

    Steve - 2017-05-03

    Send the result of the query below (run at corp):
    select * from sym_outgoing_batch where channel_id='reload' and node_id='002';

    1. Which database are you using at corp & store?
    2. Do the replicated tables have primary keys?
     
  • SS_sym

    SS_sym - 2017-05-03

    Hi,
    Thanks for the response. When I execute this query, I see no rows returned from sym_outgoing_batch.
    1) I am using Postgres (9.5) on both the corp and store sides.
    2) The replicated tables have primary keys.

    Also, for the three tables, I have defined separate channels with this configuration:

    channel_id      processing_order  max_batch_size  max_batch_to_send  max_data_to_route
    table1_channel  1                 1000            1                  100000
    table2_channel  2                 10000           1                  100000
    table3_channel  3                 100             1                  100000

    I want table1 to be sent first, followed by table2 and then table3. And even though the records in table2 number in the thousands, I see that the batches are very small:

    DataLoaderService - 32 data and 3 batches loaded during push request from corp:000:000.

    How can I force it to send larger batches? Table3 depends on table2: if the rows of table2 are not inserted first, my lookup transform for table3 fails.
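    For reference, these per-channel settings live in the sym_channel table and can be adjusted there (a sketch; the value below is illustrative, not a recommendation):

    ```sql
    -- max_batch_to_send = 1 means only one batch per channel goes out per sync;
    -- raising it (illustrative value) allows more batches in a single push.
    update sym_channel
       set max_batch_to_send = 10
     where channel_id in ('table1_channel', 'table2_channel', 'table3_channel');
    ```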

    Thanks!

     
  • Steve

    Steve - 2017-05-04

    Hi,

    If there are no reload event rows, it means the reload event was never created, so the node won't be reloaded.

    Are you using a command prompt to do the node reload? If yes, make sure to "run as admin".
    Also, check the log file for any errors.
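    The missing reload event can also be checked for directly (a sketch, run at corp; in the SymmetricDS schema, reload events are recorded in sym_data with event_type 'R'):

    ```sql
    -- If reload-node worked, recent reload ('R') events should appear here
    -- once the routing job has run.
    select data_id, table_name, event_type, channel_id, create_time
    from sym_data
    where event_type = 'R'
    order by data_id desc;
    ```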

     
  • SS_sym

    SS_sym - 2017-05-04

    Yes, I am using this command (logged in as admin):
    bin/symadmin --engine corp-000 reload-node store-002

    It says: Successfully enabled initial load for node 002
    But I see no entry in the sym_outgoing_batch table for it.

    Is there any other command to reload the entire node after deleting all the data on the target node?

    However, reload-table works fine: this command succeeds, and I see an entry in sym_outgoing_batch for it with status OK:
    * bin/symadmin --engine corp-000 reload-table table1 table2 --node 002
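    As a fallback, the flag that reload-node is supposed to set can be flipped by hand (a sketch, run against the corp database, then let the jobs pick it up; use with care):

    ```sql
    -- Manually request a full initial load for node 002
    -- (this is the flag that symadmin reload-node sets under the covers).
    update sym_node_security
       set initial_load_enabled = 1,
           initial_load_time = null
     where node_id = '002';
    ```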

    Thanks!

     
