I am having trouble generating debug-level logs with HA-JDBC.
Currently all my HA-JDBC logs are getting dumped into catalina.out, which is fine for me, but they are high-level INFO logs only.
I have tested this with both single-node and dual-node cluster scenarios.
Details of the environment:
JDK 1.6
ha-jdbc-2.0.16-rc-1-jdk1.6.jar
My Tomcat logging.properties file is as follows:
~~~~~~
handlers = 1catalina.org.apache.juli.FileHandler, 2localhost.org.apache.juli.FileHandler, 3manager.org.apache.juli.FileHandler, 4host-manager.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.FileHandler.prefix = catalina.
2localhost.org.apache.juli.FileHandler.level = FINE
2localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.FileHandler.prefix = localhost.
3manager.org.apache.juli.FileHandler.level = FINE
3manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.FileHandler.prefix = manager.
4host-manager.org.apache.juli.FileHandler.level = FINE
4host-manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
4host-manager.org.apache.juli.FileHandler.prefix = host-manager.
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = ALL
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = ALL
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = ALL
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler
# For example, set the org.apache.catalina.util.LifecycleBase logger to log
# each component that extends LifecycleBase changing state:
#org.apache.catalina.util.LifecycleBase.level = FINE
# To see debug messages in TldLocationsCache, uncomment the following line:
#org.apache.jasper.compiler.TldLocationsCache.level = FINE
net.sf.hajdbc.level = ALL
org.jgroups.level = ALL
org.apache.catalina.level=ALL
~~~~~~
The relevant output in catalina.out is something like this:
My ha-jdbc-re.xml file is as follows:
Any idea what I am missing or doing wrong that is causing this issue? Thanks.
Last edit: himanshu jain 2014-04-07
First of all, I recommend using version 3.0.1 of HA-JDBC.
HA-JDBC 2.0.x used the SLF4J logging framework for logging. To integrate SLF4J with commons-logging or log4j, you need to include the appropriate SLF4J adapter jar. See: http://www.slf4j.org/manual.html
HA-JDBC 3.0 uses a logging abstraction that will automatically integrate with whatever logging provider your server/application uses.
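For example, to route SLF4J output to log4j, the classpath would contain something like the following (the version numbers here are illustrative, not prescriptive):

~~~~~~
slf4j-api-1.6.6.jar       # the SLF4J API that HA-JDBC 2.0.x logs against
slf4j-log4j12-1.6.6.jar   # the SLF4J-to-log4j binding
log4j-1.2.17.jar          # the actual log4j implementation
~~~~~~

Only one SLF4J binding jar should be on the classpath at a time; mixing slf4j-jdk14 and slf4j-log4j12 means SLF4J will pick one of them arbitrarily, and the other framework's configuration is silently ignored.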
Thanks for the response, Paul.
Unfortunately, at this stage I don't think we can afford to migrate to the higher version.
Regarding HA-JDBC 2.0.x, I am using SLF4J (slf4j-api-1.6.6.jar, slf4j-jdk14-1.6.6.jar, and log4j.jar).
My log4j.xml file is as follows:
I had already attached the logging.properties file in my previous post, in case you want to have a look at it.
I don't think I am having issues configuring SLF4J with my application, since I am successfully getting the logs for the files mentioned in my log4j.xml, i.e.:
1) haz.log
2) resc.log
3) jgrp.log
However, while my hajdbc.log file does get created, it is always 0 bytes in size, and all I see in catalina.out related to net.sf.hajdbc (as posted in my previous post) is very brief info about HA-JDBC clustering.
Do you think I am missing some particular setting or configuration that needs to be set for HA-JDBC? Thanks.
You're using the wrong SLF4J binding library. It seems you want to use log4j for logging, but you've chosen the JDK logging provider for SLF4J. Use slf4j-log4j12.jar instead.
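With the log4j binding in place, a minimal log4j.xml fragment along these lines (the appender name and file path are hypothetical, not taken from the poster's actual configuration) would direct net.sf.hajdbc output at DEBUG level to a dedicated file:

~~~~~~
<appender name="HAJDBC" class="org.apache.log4j.FileAppender">
  <param name="File" value="logs/hajdbc.log"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%t] %c - %m%n"/>
  </layout>
</appender>

<logger name="net.sf.hajdbc">
  <level value="DEBUG"/>
  <appender-ref ref="HAJDBC"/>
</logger>
~~~~~~

Note that if an appender sets a Threshold param, it must be DEBUG or lower, or the debug messages will be filtered out at the appender even though the logger emits them.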
Hi Paul,
Thanks for the hint.
I am now using slf4j-log4j12.jar. However, I think I have found why I am unable to get HA-JDBC debug logs.
I am getting HA-JDBC INFO logs as follows:
~~~~~~
2014-04-08 09:03:45,824 INFO [Incoming-2,tcp,192.168.2.76:7800] net.sf.hajdbc.distributable.DistributableStateManager - 192.168.2.77:7800 joined re channel
2014-04-08 09:03:45,911 INFO [pool-2-thread-1] net.sf.hajdbc.distributable.DistributableStateManager - Using remote initial cluster state [database1, database2] from 192.168.2.77:7800.
2014-04-08 10:09:29,163 INFO [pool-2-thread-1] net.sf.hajdbc.sql.AbstractDatabaseCluster - Initializing HA-JDBC 2.0.16-rc-1 from file:/opt/apache-tomcat-7.0.23/lib/ha-jdbc-re.xml
2014-04-08 10:09:30,144 INFO [Incoming-1,tcp,192.168.2.76:7800] net.sf.hajdbc.distributable.DistributableStateManager - 192.168.2.77:7800 joined re channel
2014-04-08 10:09:30,230 INFO [pool-2-thread-1] net.sf.hajdbc.distributable.DistributableStateManager - Using remote initial cluster state [database1, database2] from 192.168.2.77:7800.
~~~~~~

Searching the HA-JDBC 2.0.16-rc-1 source, these are the only logger.debug() calls I can find:

~~~~~~
ha-jdbc-2.0.16-rc-1\src\net\sf\hajdbc\sync\DifferentialSynchronizationStrategy.java
00145: logger.debug(selectSQL);
00169: logger.debug(deleteSQL);
00176: logger.debug(insertSQL);
00187: logger.debug(updateSQL);
ha-jdbc-2.0.16-rc-1\src\net\sf\hajdbc\sync\FullSynchronizationStrategy.java
00121: logger.debug(deleteSQL);
00135: logger.debug(insertSQL);
ha-jdbc-2.0.16-rc-1\src\net\sf\hajdbc\sync\SynchronizationSupport.java
00085: logger.debug(sql);
00115: logger.debug(sql);
00152: logger.debug(sql);
00212: logger.debug(sql);
00242: logger.debug(selectSQL);
00268: logger.debug(alterSQL);
00302: logger.debug(sql);
00332: logger.debug(sql);
~~~~~~
So, in short, I was looking for something that isn't there. My bad luck :)
Thanks again for the help.
Last edit: himanshu jain 2014-04-08