From: Casey J. <cas...@jo...> - 2011-11-30 17:44:00
and also: http://demo.exist-db.org/exist/configuration.xml?q=.%2F%2Fsection[ft%3Aquery%28.%2F%2Ftitle%2C%20%27scheduler%27%29]%20or%20.%2F%2Fsection[ft%3Aquery%28.%2C%20%27scheduler%27%29]&id=3.3.4.8#3.3.4.8

On Wed, Nov 30, 2011 at 12:39 PM, KUNDU, Raj <raj...@ou...> wrote:

Thanks, Patrick, for your help. I will do the search to accomplish my goal stated earlier. If I succeed, I will post the process on the same thread.

Thanks & Regards,

Raj Kumar Kundu
Oxford University Press
raj...@ou...
+44 (0)1865 353924 (Office)
+44 (0)7790380875 (Cell)

From: Patrick Bosek [mailto:pat...@jo...]
Sent: 30 November 2011 17:35
To: KUNDU, Raj
Cc: exi...@li...
Subject: Re: [Exist-open] Help Needed: Clustering does not work

Hi Raj,

Yes, I bet this is a very accomplishable goal. Getting information in on a daily interval shouldn't be hard at all; you have a number of options. Search this mailing list for File2exist (or something like that); it was posted a few days ago and sounded like an import library that might be perfect for you.

Then, for performance (depending on your budget), I would recommend you take a look at Rackspace's cloud servers with their cloud load balancers. I really hate recommending Rackspace, because their customer service is completely awful, but I don't know of anyone else who provides a cloud load-balancing solution.

Hope this helps.

Cheers,
Patrick

On Wed, Nov 30, 2011 at 12:23 PM, KUNDU, Raj <raj...@ou...> wrote:

Hi Patrick,

Thanks again for your quick response.

Essentially, our business is interested in reading only, but eXist-db needs to be updated from an external feed (average size 60 MB) at a regular interval (most probably daily). The updated information needs to be replicated across all the instances.
Do you think this can be handled through some custom-written Java program?

Thanks & Regards,

Raj Kumar Kundu

From: Patrick Bosek [mailto:pat...@jo...]
Sent: 30 November 2011 17:16
To: KUNDU, Raj
Cc: exi...@li...
Subject: Re: [Exist-open] Help Needed: Clustering does not work

Hi Raj,

I think this depends on your needs. Our business is very interested in a clustering solution to provide a homogeneous global system, and also for the purposes of scalability. However, in our case, we have a lot of write transactions and a lot of people concurrently modifying the database, which means transactions have to be distributed in real time. If you're looking at eXist as a method of serving data, the problem might be a little easier to solve. You could probably write some fairly simple Java to replicate content to a delivery network of eXist instances in near real time, then use load balancers to distribute your requests and get the scale you're looking for.

Cheers,
Patrick

On Wed, Nov 30, 2011 at 12:05 PM, KUNDU, Raj <raj...@ou...> wrote:

Thanks, Patrick, for your input on this.

Just out of my curiosity, do you know of any other mechanism through which we can sync up two eXist-db instances?

Can we use a central file system shared by two eXist-db instances? If this is possible, will a regular backup of that file system help with system recovery? Do you think this somehow provides scalability? In my opinion, multiple eXist-db instances with a central file system will not bring in the scalability.

Thanks & Regards,

Raj Kumar Kundu
From: Patrick Bosek [mailto:pat...@jo...]
Sent: 30 November 2011 16:45
To: KUNDU, Raj
Cc: exi...@li...
Subject: Re: [Exist-open] Help Needed: Clustering does not work

Hi Raj,

To my knowledge, clustering isn't fully implemented in eXist, so I don't think this is going to work for you.

-Patrick

On Wed, Nov 30, 2011 at 5:56 AM, KUNDU, Raj <raj...@ou...> wrote:

Hello guys,

I am working with eXist-db 1.4.1-rev15155. I am able to access eXist-db using REST service calls (GET, PUT, POST, DELETE). Now I am trying to do the clustering configuration by following http://exist-db.org/cluster.html. I have gone through the manual on the JGroups site (http://www.jgroups.org/manual-3.x/html/index.html). I have two instances of eXist-db running on my local machine, on ports 8899 and 8898. I have the following cluster element defined in conf.xml at exist_home for both eXist-db instances:

<cluster dbaPassword="###" dbaUser="admin"
         exclude="/db/system,/db/system/config"
         journalDir="webapp/WEB-INF/data/journal"
         protocol="UDP(mcast_addr=228.1.2.3;mcast_port=45566;ip_ttl=8;ip_mcast=true;mcast_send_buf_size=800000;mcast_recv_buf_size=150000;ucast_send_buf_size=800000;ucast_recv_buf_size=150000;loopback=false):
                   PING(timeout=2000;num_initial_members=3;up_thread=true;down_thread=true):
                   MERGE2(min_interval=10000;max_interval=20000):
                   FD(shun=true;up_thread=true;down_thread=true;timeout=2500;max_tries=5):
                   VERIFY_SUSPECT(timeout=3000;num_msgs=3;up_thread=true;down_thread=true):
                   pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192;up_thread=true;down_thread=true):
                   UNICAST(timeout=300,600,1200,2400,4800;window_size=100;min_threshold=10;down_thread=true):
                   pbcast.STABLE(desired_avg_gossip=20000;up_thread=true;down_thread=true):
                   FRAG(frag_size=8192;down_thread=true;up_thread=true):
                   pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=true;print_local_addr=true)"/>

Still, data is not replicated to the other instance when I create a collection or an XML document in one instance.

I am able to run the demo provided at http://www.jgroups.org/manual-3.x/html/ch02.html#RunningTheDemo successfully, so I am not expecting any problem with multicasting.

When I run eXist-db in debug mode, I can see the following log:

29 Nov 2011 18:25:42,494 [main] DEBUG (XQueryContext.java [loadModuleClasses]:2899) - Configured module 'http://exist-db.org/xquery/xqdoc' implemented in 'org.exist.xqdoc.xquery.XQDocModule'
29 Nov 2011 18:25:42,494 [main] DEBUG (Configuration.java [configureCluster]:363) - cluster.protocol: UDP(mcast_addr=228.8.8.8;mcast_port=7600;ip_ttl=8;ip_mcast=true;mcast_send_buf_size=800000;mcast_recv_buf_size=150000;ucast_send_buf_size=800000;ucast_recv_buf_size=150000;loopback=false): PING(timeout=2000;num_initial_members=3;up_thread=true;down_thread=true): MERGE2(min_interval=10000;max_interval=20000): FD(shun=true;up_thread=true;down_thread=true;timeout=2500;max_tries=5): VERIFY_SUSPECT(timeout=3000;num_msgs=3;up_thread=true;down_thread=true): pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192;up_thread=true;down_thread=true): UNICAST(timeout=300,600,1200,2400,4800;window_size=100;min_threshold=10;down_thread=true): pbcast.STABLE(desired_avg_gossip=20000;up_thread=true;down_thread=true): FRAG(frag_size=8192;down_thread=true;up_thread=true): pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=true;print_local_addr=true)
29 Nov 2011 18:25:42,495 [main] DEBUG (Configuration.java [configureCluster]:368) - cluster.user: admin
29 Nov 2011 18:25:42,496 [main] DEBUG (Configuration.java [configureCluster]:373) - cluster.pwd: ###
29 Nov 2011 18:25:42,497 [main] DEBUG (Configuration.java [configureCluster]:378) - cluster.journalDir: webapp/WEB-INF/data/journal
29 Nov 2011 18:25:42,500 [main] DEBUG (Configuration.java [configureCluster]:391) - cluster.exclude: [/db/system, /db/system/config, /db/system/temp]
29 Nov 2011 18:25:42,501 [main] DEBUG (Configuration.java [configureCluster]:399) - cluster.journal.maxStore: 65000
29 Nov 2011 18:25:42,502 [main] DEBUG (Configuration.java [configureCluster]:406) - cluster.journal.shift: 100
29 Nov 2011 18:25:42,504 [main] INFO (eXistURLStreamHandlerFactory.java [init]:53) - Succesfully registered eXistURLStreamHandlerFactory.
29 Nov 2011 18:25:42,504 [main] DEBUG (Configuration.java [configureValidation]:1190) - validation.mode: no
29 Nov 2011 18:25:42,506 [main] DEBUG (Configuration.java [configureValidation]:1196) - Creating eXist catalog resolver
29 Nov 2011 18:25:42,511 [main] DEBUG (eXistXMLCatalogResolver.java [<init>]:50) - Initializing
29 Nov 2011 18:25:42,511 [main] DEBUG (Configuration.java [configureValidation]:1204) - Found 1 catalog uri entries.
29 Nov 2011 18:25:42,511 [main] DEBUG (Configuration.java [configureValidation]:1205) - Using dbHome=C:\Raj\eXist
29 Nov 2011 18:25:42,513 [main] DEBUG (Configuration.java [configureValidation]:1218) - using webappHome=file:/C:/Raj/eXist/webapp/
29 Nov 2011 18:25:42,514 [main] INFO (Configuration.java [configureValidation]:1233) - Add catalog uri file:/C:/Raj/eXist/webapp//WEB-INF/catalog.xml

Could anyone please tell me what I am doing wrong? What do I need to do to make clustering work on my local environment?

Sometimes I get the following (left-truncated) logs on the eXist-db console:

artzScheduler_Worker-1] DEBUG (SystemTaskManager.java [runSystemTask]:64) - System task completed.
DEBUG (ConfigurationHelper.java [getExistHome]:55) - Got eXist home from broker: C:\eXist
INFO (Descriptor.java [<init>]:102) - Reading Descriptor from file C:\eXist\descriptor.xml
DEBUG (ConfigurationHelper.java [getExistHome]:55) - Got eXist home from broker: C:\eXist
ist\mime-types.xml
DEBUG (Collection.java [validateXMLResourceInternal]:1223) - Scanning document /db/oup/Catalogue-good.xml
DEBUG (GrammarPool.java [retrieveInitialGrammarSet]:81) - Retrieve initial grammarset (http://www.w3.org/TR/REC-xml).
DEBUG (GrammarPool.java [retrieveInitialGrammarSet]:85) - Found 0 grammars.
DEBUG (Collection.java [storeXMLInternal]:1041) - storing document 19113 ...
DEBUG (GrammarPool.java [retrieveInitialGrammarSet]:81) - Retrieve initial grammarset (http://www.w3.org/TR/REC-xml).
DEBUG (GrammarPool.java [retrieveInitialGrammarSet]:85) - Found 0 grammars.
DEBUG (DefaultCacheManager.java [requestMem]:189) - Growing cache dom.dbx (a org.exist.storage.cache.LRDCache) from 64
DEBUG (DefaultCacheManager.java [requestMem]:189) - Growing cache dom.dbx (a org.exist.storage.cache.LRDCache) from 96
DEBUG (DefaultCacheManager.java [requestMem]:189) - Growing cache dom.dbx (a org.exist.storage.cache.LRDCache) from 14
DEBUG (Collection.java [storeXMLInternal]:1055) - document stored.
DEBUG (SystemTaskManager.java [runSystemTask]:61) - Running system maintenance task: org.exist.storage.sync.SyncTask
DEBUG (SystemTaskManager.java [runSystemTask]:64) - System task completed.
DEBUG (SystemTaskManager.java [runSystemTask]:61) - Running system maintenance task: org.exist.storage.sync.SyncTask
DEBUG (SystemTaskManager.java [runSystemTask]:64) - System task completed.
DEBUG (SystemTaskManager.java [runSystemTask]:61) - Running system maintenance task: org.exist.storage.sync.SyncTask
DEBUG (SystemTaskManager.java [runSystemTask]:64) - System task completed.
artzScheduler_Worker-4] DEBUG (SystemTaskManager.java [runSystemTask]:61) - Running system maintenance task: org.exi

artzScheduler_Worker-4] DEBUG (SystemTaskManager.java [runSystemTask]:64) - System task completed.
DEBUG (DefaultCacheManager.java [requestMem]:189) - Growing cache collections.dbx (a org.exist.storage.cache.LRUCache)
DEBUG (DefaultCacheManager.java [requestMem]:189) - Growing cache collections.dbx (a org.exist.storage.cache.LRUCache)
artzScheduler_Worker-3] DEBUG (SystemTaskManager.java [runSystemTask]:61) - Running system maintenance task: org.exi

Please let me know if you need any information from my end to investigate this issue. Any help or clue will be appreciated. I have been stuck on this problem for the last two days.

Thanks in advance for your time and help.

Thanks & Regards,

Raj Kumar Kundu

Oxford University Press (UK) Disclaimer

This message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. OUP does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of OUP. If this email has come to you in error, please delete it, along with any attachments. Please note that OUP may intercept incoming and outgoing email communications.

------------------------------------------------------------------------------
All the data continuously generated in your IT infrastructure contains a definitive record of customers, application performance, security threats, fraudulent activity, and more. Splunk takes this data and makes sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-novd2d
_______________________________________________
Exist-open mailing list
Exi...@li...
https://lists.sourceforge.net/lists/listinfo/exist-open

--
Patrick Bosek
Jorsek Software
Cell (585) 820 9634
Office (877) 492 2960
Jorsek.com

--
Casey Jordan
easyDITA, a product of Jorsek LLC
"CaseyDJordan" on LinkedIn, Twitter & Facebook
(585) 348 7399
easydita.com

This message is intended only for the use of the Addressee(s) and may contain information that is privileged, confidential, and/or exempt from disclosure under applicable law. If you are not the intended recipient, please be advised that any disclosure, copying, distribution, or use of the information contained herein is prohibited. If you have received this communication in error, please destroy all copies of the message, whether in electronic or hard copy format, as well as attachments, and immediately contact the sender by replying to this e-mail or by phone. Thank you.
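[Editor's note] Patrick's suggestion of "fairly simple Java to replicate content to a delivery network of eXist instances" over the REST API could be sketched roughly as below. The node list (reusing the thread's ports 8899/8898), the /db/oup collection, and the document name are placeholders taken from or assumed around this thread, not a tested deployment; eXist's REST interface accepts HTTP PUT to /exist/rest/<collection>/<document> to create or replace a resource.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ExistReplicator {

    // Hypothetical read-only delivery nodes; replace with your real hosts.
    static final List<String> NODES =
            List.of("http://localhost:8899", "http://localhost:8898");

    // Build the REST URL that stores a document into a collection.
    static String restUrl(String node, String collection, String docName) {
        return node + "/exist/rest" + collection + "/" + docName;
    }

    // PUT one file to every node; run this from a daily cron/scheduler job.
    // (Add Basic-Auth headers if your instances are not open to writes.)
    static void replicate(Path feed, String collection) throws IOException {
        byte[] body = Files.readAllBytes(feed);
        for (String node : NODES) {
            URL url = new URL(restUrl(node, collection, feed.getFileName().toString()));
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("PUT");
            con.setRequestProperty("Content-Type", "application/xml");
            con.setDoOutput(true);
            try (OutputStream out = con.getOutputStream()) {
                out.write(body);
            }
            System.out.println(node + " -> HTTP " + con.getResponseCode());
        }
    }

    public static void main(String[] args) {
        // Dry run: print the target URLs without touching the network.
        for (String node : NODES) {
            System.out.println(restUrl(node, "/db/oup", "Catalogue-good.xml"));
        }
    }
}
```

With a load balancer in front of the nodes for reads, this push-on-import pattern sidesteps the unfinished cluster support entirely for a read-mostly workload like the daily 60 MB feed described above.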