From: Guijarro, J. <jul...@hp...> - 2008-02-18 18:31:58
|
Hi Dave,

Are you running the latest version of SmartFrog? If so, you need to use JDK 1.5. We abandoned support for 1.4 after version 3.12.000 because of some potential problems with concurrency in the 1.4 VM architecture, and because we thought almost nobody was using 1.4 any more. Are you constrained by JDK 1.4? I ask because I read in your previous email that you were planning to migrate everything to the 1.4 JDK. At the moment there is very little 1.5-only code in the SF core, but some of the newer components are being written for 1.5 only.

I think the problem you are getting could be related to 1.4 failing to load the 1.5 Java classes; but having tried to compile with 1.5 and run with 1.4, I get a different message:

    ...\smartfrog\dist>bin\sfDaemon
    Exception in thread "main" java.lang.UnsupportedClassVersionError: org/smartfrog/SFSystem (Unsupported major.minor version 49.0)
            at java.lang.ClassLoader.defineClass0(Native Method)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:539)
            ...

Could you try running "sfDaemon -d > sfdiag.txt" and send us the result, to see if we can see what could be wrong? I don't know whether this will produce anything useful, because you seem to be getting some kind of bytecode or JVM problem, but it is worth trying.

Regards,

Julio G.

PS: I haven't replied to your previous email yet because it requires some proper reading and I am just catching up with my email after a short break. I will try to reply to it tomorrow.

-----Original Message-----
From: sma...@li... [mailto:sma...@li...] On Behalf Of David Brown
Sent: 17 February 2008 19:39
To: sma...@li...
Subject: [Smartfrog-users] Noob and SmartFrog Tutorial sfDaemon does not start with exceptions et. al.

Hello Steve, commit-folks, dev, gurus and users,

I overshot my first email to this list by a considerable margin. I am back with slightly less ambitious intent. I will start humbly with the SmartFrog Tutorial example: helloworld. I have successfully compiled SF at the command line against: dist, build and all. I have successfully pulled the ant target <published> into Eclipse, and I am able to compile the example target <kernel-dist> for the helloworld example.

Following the instructions on page 3 of the SF Tutorial to start the sfDaemon from the command line...

    A beginner's configuration is to have your Java classes on your local host, and to start the sfDaemon script manually from a command-line shell. (julio.guijarro)

...I am getting the exception included below. The sfDaemon is a prerequisite to anything I hope to do with SF.

    ERROR: Could not use org.smartfrog.iniFile : Unresolved compilation problem:
    Exception in thread "main" java.lang.Error: Unresolved compilation problem:
            at org.smartfrog.sfcore.logging.LogFactory.sfGetProcessLog(LogFactory.java:192)
            at org.smartfrog.SFSystem.sfLog(SFSystem.java:704)
            at org.smartfrog.SFSystem.initSystem(SFSystem.java:637)
            at org.smartfrog.SFSystem.execute(SFSystem.java:404)
            at org.smartfrog.SFSystem.main(SFSystem.java:392)

I looked at the JavaBean LogFactory source, but I fail to see what is wrong by inspecting the source. Ultimately my goal is the dynamicwebserver example and onward to a dynamicservletserver, e.g. Tomcat or JBoss. The particulars follow:

OS: Windows XP (dev), Debian 3.1 (testing).
JDK: 1.4.x (Windows and Debian 3.1).
Eclipse: 3.3 (Windows XP)

Please advise,

David. |
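The "Unsupported major.minor version 49.0" in Julio's trace is the giveaway for this class of problem: major version 49 means the class was compiled for Java 5, and a 1.4 VM refuses to load it. Independent of SmartFrog, you can confirm what a suspect class was built for by reading the two version fields in the class-file header; the sketch below is plain Java and the path argument is just an example.

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Prints the class-file version of a .class file.
    // Major 48 = compiled for Java 1.4, 49 = Java 5, 50 = Java 6.
    public class ClassVersion {
        public static void main(String[] args) throws IOException {
            DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
            try {
                int magic = in.readInt();            // should be 0xCAFEBABE
                int minor = in.readUnsignedShort();  // minor_version comes first in the header
                int major = in.readUnsignedShort();
                System.out.println(args[0] + ": magic=0x" + Integer.toHexString(magic)
                        + " major=" + major + " minor=" + minor);
            } finally {
                in.close();
            }
        }
    }

Running it against, say, an org/smartfrog/SFSystem.class extracted from the distribution JAR and seeing major=49 confirms a 1.5-only build that a 1.4 JDK cannot run.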
From: David B. <da...@da...> - 2008-02-17 19:39:31
|
Hello Steve, commit-folks, dev, gurus and users,

I overshot my first email to this list by a considerable margin. I am back with slightly less ambitious intent. I will start humbly with the SmartFrog Tutorial example: helloworld. I have successfully compiled SF at the command line against: dist, build and all. I have successfully pulled the ant target <published> into Eclipse, and I am able to compile the example target <kernel-dist> for the helloworld example.

Following the instructions on page 3 of the SF Tutorial to start the sfDaemon from the command line...

    A beginner's configuration is to have your Java classes on your local host, and to start the sfDaemon script manually from a command-line shell. (julio.guijarro)

...I am getting the exception included below. The sfDaemon is a prerequisite to anything I hope to do with SF.

    ERROR: Could not use org.smartfrog.iniFile : Unresolved compilation problem:
    Exception in thread "main" java.lang.Error: Unresolved compilation problem:
            at org.smartfrog.sfcore.logging.LogFactory.sfGetProcessLog(LogFactory.java:192)
            at org.smartfrog.SFSystem.sfLog(SFSystem.java:704)
            at org.smartfrog.SFSystem.initSystem(SFSystem.java:637)
            at org.smartfrog.SFSystem.execute(SFSystem.java:404)
            at org.smartfrog.SFSystem.main(SFSystem.java:392)

I looked at the JavaBean LogFactory source, but I fail to see what is wrong by inspecting the source. Ultimately my goal is the dynamicwebserver example and onward to a dynamicservletserver, e.g. Tomcat or JBoss. The particulars follow:

OS: Windows XP (dev), Debian 3.1 (testing).
JDK: 1.4.x (Windows and Debian 3.1).
Eclipse: 3.3 (Windows XP)

Please advise,

David. |
From: David B. <da...@da...> - 2008-02-15 21:39:02
|
Hello Steve, they who commit, developers, gurus and all users of SmartFrog,

Steve replied to an email from me with a 3-step plan as a solution to the problem I am presenting below. This email is step #2 of 3. Thanks in advance and please advise, David.

My current gig:

* Large corporation: 25k+ employees (based on user population).
* 3 web applications and 1 web service (Apache Axis 1.3) running under Tomcat or JBoss.
* Projected user population: 25k. All applications are internal.
* Multiple disparate-version JDKs: 1.3.x, 1.4.x.
* Multiple disparate-version JEE servlet containers (Tomcat/JBoss).
* Multiple disparate versions of the d/b (MS SQL Server 2003, 2005) supporting all development and production instances. Planned but no separate testing of the d/b.
* No SLA from any of the vendors, so software support availability is unknown.
* No source code. Only Ant build.xml to deploy (or copy and deploy) and then start the associated app servers from .war or .zip binaries.
* Current tools for testing and monitoring: JMeter and possibly MC4J, JConsole, JHat, JStack, JStat.
* Current completed testing: for the purpose of creating a baseline metric for all target applications I have created a JMeter Test Plan using the distributed local JMeter master / remote JMeter slave model. This model works quite well and is easy to deploy manually, but since JMeter has concurrency problems with multiple ThreadGroup users, only a single user per (hardware) slave is possible. The drawback: for a distributed model to emulate a real-time/real-world production model, many actual machines are required as slaves to emulate many distributed users. Only 1 of 4 applications has been tested using the JMeter distributed-slave Test Plan, and as yet I do not have a viable baseline metric for any of the target applications mentioned above.

This is what the client wants: consolidation of all web apps and services under a single JBoss installation with scaling for all users.

This is what I think is reality:

1) Upgrade 1 application from JDK 1.3 and Tomcat 4.x to JDK 1.4.x and TC 5.5.x. The purpose of upgrading the one application is to put all applications on the same JDK and Tomcat version plane.
2) Instead of multiple TC/JBoss instances for clustering/load-balancing, I would like to employ SmartFrog for some type of virtualized machine deployment of the bullets described above. Does this mean VMware? Xen? Globus Toolkit? The client claims they are currently using VMware.
3) Use SmartFrog for the baseline testing to generate an initial metric of the existing system(s).
4) Migrate all applications and servlet containers to the newly created virtual machine from step 2.
5) Use SmartFrog for load/stress testing of all target applications after the successful migration, deployment and startup of all applications. Purpose: to generate a comparative metric against the initial baseline metric established in step 3.
6) Document fully all 5 steps above and present all results with SmartFrog as the framework rock-star-hero.

Rants and raves welcomed. |
From: Steve L. <ste...@hp...> - 2008-02-15 11:19:56
|
SmartFrog 3.12.022 is now up on the sourceforge site, with Maven/Ivy compatible artifacts hosted at http://smartfrog.sourceforge.net/repository/

Here is the text release announcement; the attached HTML announcement includes links to the relevant JIRA issues.

SmartFrog 3.12.022
======================

This is a new release of SmartFrog, the Java-based, LGPL-licensed distributed deployment framework developed by HP Laboratories. SmartFrog enables applications to be deployed across multiple machines, configuring different aspects of the system so that they are all consistently configured, and managing the life-cycle of the application as a whole.

The project's home page is http://smartfrog.org/

The release artifacts are available at http://sourceforge.net/project/showfiles.php?group_id=87384&package_id=176308

This release is 3.12.022; built from revision 5963 of the SVN repository. This release has an extended language with the ability to tag attributes, and includes the following items:

* Core smartfrog daemon, including services to manage files, start and stop Java and native programs.
* Example components and applications.
* Ant support: ant tasks to deploy and terminate applications from a build.
* Ant components: the ability to execute ant tasks in a deployment.
* Anubis: a partition aware tuple-space that can be used to implement fault tolerant systems.
* Database: components to issue database commands, and deploy HSLDB and MySQL.
* JMX: the ability to configure and manage JMX components, and to manage SmartFrog components over JMX.
* Logging: integration with Apache commons-logging and Log4J.
* Networking: email, FTP, SSH, DNS support.
* Quartz: scheduled operations using Quartz libraries.
* Scripting: support for BSF-hosted scripting languages.
* Testing: Distributed JUnit and component testing with SFUnit.
* WWW: deployment of WAR and EAR files to application servers. Deploy-by-copy is provided for all application servers that support it, and sample templates are provided to start and stop Tomcat and JBoss. The Jetty component can configure and deploy individual servlets, eliminating much of the need for WAR files and application servers.
* XML: XML support with XOM.
* XMPP: Presence and messaging over Jabber.

Packaging
=========

This release is available as:

* RPM files inside a .tar.gz file.
* a JAR installer.
* the original core smartfrog distribution as .zip and .tar.gz (deprecated)

The RPM installation is for RPM-based Linux systems. The archive contains the following RPM files:

smartfrog: the core SmartFrog distribution.
smartfrog-daemon: the shell scripts to add the smartfrog distribution to the path, and to run the daemon on start-up.
smartfrog-demo: example code and documentation.
smartfrog-javadocs: javadocs for the project
smartfrog-ant: Ant task and build file execution
smartfrog-anubis: Distributed partition-aware tuple space
smartfrog-database: Database access
smartfrog-jmx: JMX integration through MX4J
smartfrog-junit: Junit 3.8.2 test execution
smartfrog-logging: Logging through Log4J and commons-logging
smartfrog-networking: SSH, SCP, FTP and email
smartfrog-quartz: Scheduled operations
smartfrog-scripting: Scripted components
smartfrog-www: Web support: Deployment and liveness pages
smartfrog-xml: XML Support
smartfrog-xmpp: XMPP/Jabber communications
smartfrog-xunit: Distributed testing and reporting

All the JAR files are also published to a repository that is compatible with Apache Maven and Ivy. Add http://smartfrog.sourceforge.net/repository/ to your repository list to pull SmartFrog artifacts into your Ivy- or Maven-based build. There are also SmartFrog components to retrieve artifacts from such a repository (the Library components under /org/smartfrog/services/os/java/library.sf ), which can be used for dynamic download of SmartFrog and other artifacts.

Security warning
================

Unless SmartFrog is configured with security, a running daemon will listen on its configured port for incoming deployment requests, and deploy the applications with the rights of the user running the daemon. When the smartfrog-daemon RPM is installed, that means that a process running as root will be listening on an open port for incoming deployment requests. Do not deploy SmartFrog this way on any untrusted network, not without turning security on and, ideally, recreating the RPMs with signed JAR files.

Building SmartFrog
==================

SmartFrog requires Java 1.5 and Ant 1.7 to build. The izpack and source .zip and .tar.gz distributions include a source tree adequate to build the entire system. To build a later release, please follow the instructions at http://sourceforge.net/svn/?group_id=87384 to check out smartfrog/trunk/core from our repository.

This release was built with revision 5963 of the repository, which is available under the SVN branch https://smartfrog.svn.sourceforge.net/svnroot/smartfrog/tags/release3.12.022

We strongly encourage anyone interested in building or extending SmartFrog to get involved in the SmartFrog developer mailing list, which can be found from the sourceforge project page http://sourceforge.net/projects/smartfrog/

Reporting Bugs
==============

Please file all bug reports at http://jira.smartfrog.org/

Thank you!

The SmartFrog Team
http://smartfrog.org/

Changes since last release
==========================

The main areas in which SmartFrog has been improved in the last fortnight are:

- Scripting: some bug fixes in shell script support.
- Jetty: some bugs found and fixed in Jetty support.
- Web pages: the LivenessPage now does regular expression matching of the returned text, adding each matched group as a numbered attribute. Interesting uses for this are left as an exercise, other than the DynDns components we are currently adding. (A rough, generic sketch of the idea follows after these release notes.)
- SSH: much better logging and fault handling, SSH commands and SCP copies are asynchronous, and a new BulkScpUpload component can upload a regular-expression-described set of files to a remote site.

There have been changes to the testing framework and Xmpp components, but they are feature-incomplete.

Release Notes - SmartFrog - Version 3.12.022

** Bug
* [SFOS-175] - There are no SSH tests that actually test for a successful SSH connection
* [SFOS-613] - rpms are being created with a bad ${rpm.docs.path} property
* [SFOS-614] - merge filename filter with filesets
* [SFOS-615] - SSH errors aren't that helpful
* [SFOS-620] - smartfrog dist/publish tasks don't include dist/private/** when it contains a new CA
* [SFOS-633] - When the machine name is resolved into an IP address. In this case the component does not detect the end of the script commands.
* [SFOS-635] - SshExec operations should be asynchronous
* [SFOS-637] - When the "executed" application fails to start the component can potentially spin forever.
* [SFOS-651] - TestCompoundImpl prints out wrong message
* [SFOS-655] - changes.txt is obsolete and no longer needed. Pull.
* [SFOS-656] - ant install target does not copy Ivy dependencies
* [SFOS-657] - bulkscp fails if it encounters a directory
* [SFOS-660] - EventCompoundImpl.deployChildCD ignores its "required" parameter
* [SFOS-663] - Null pointer exception when deploying a jetty server, a listener, a servlet context and a servlet.
* [SFOS-664] - java.lang.IllegalArgumentException when deploying a jetty server + listener + context + servlet
* [SFOS-666] - "wrong class found for attribute" error in listener, context and servlet.
* [SFOS-671] - Xmpp Roster must default to 'do not accept friend requests'
* [SFOS-675] - fileImpl has mustRead and mustWrite attributes inverted
* [SFOS-678] - Change to Quartz have broken the build

** Improvement
* [SFOS-618] - SSH components need a single central (and better) fault extraction from JSch exceptions
* [SFOS-634] - remove failonerror attribute from Ssh components
* [SFOS-636] - sshexec logFile should take a File component as well as a path string
* [SFOS-641] - add ability to specify connect timeout for an ssh connection with the connectTimeout attribute
* [SFOS-642] - Hook jsch logging up to SmartFrog logger
* [SFOS-652] - FileStore components to move to Java5 templates
* [SFOS-659] - make TestRunnerImpl a subclass of EventCompoundImpl

** New Feature
* [SFOS-603] - Design and implement components that represent sets of files or other resources
* [SFOS-619] - core/release to include signed JARs of all the components
* [SFOS-640] - Add methods to SmartFrogThread to request (politely) the termination of the threads
* [SFOS-647] - Provide an OutputStream that relays its output to a log
* [SFOS-658] - log number of files and bytes uploaded in SCP
* [SFOS-668] - implement regexp validation of remote web pages fetched with the LivenessPage component

** Task
* [SFOS-95] - Fix up security build targets
* [SFOS-453] - Add functional tests for SSH that run under VMs
* [SFOS-623] - review project-template.pom files and ensure that they are in sync with the ivy.xml files
* [SFOS-632] - Locate all uses of new TerminationRecord and review them

** Sub-task
* [SFOS-411] - When creating a copy of a master image create a new UUID
* [SFOS-412] - Delete virtual HDD's, too, when deleting a vm copy.
* [SFOS-626] - NPE in restlet mime type checking
* [SFOS-629] - Restlet Httpclient has no proxy awareness
* [SFOS-645] - empty %post section in .spec file |
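A side note on the LivenessPage regular-expression matching mentioned in the changes above: the component's actual attribute names are not reproduced here, but the underlying idea (fetch the page, require the pattern to match, publish each captured group under a numbered name) can be sketched in plain Java as follows; the class name, URL and pattern are placeholders only.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Fetch a page, fail if the pattern does not match, and print each
    // captured group under a numbered name - the shape of the LivenessPage idea.
    public class PageMatchSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL(args[0]);                    // e.g. http://localhost:8080/status
            Pattern pattern = Pattern.compile(args[1]);    // e.g. "version=(\\d+\\.\\d+)"
            BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
            StringBuffer body = new StringBuffer();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
            in.close();
            Matcher matcher = pattern.matcher(body);
            if (!matcher.find()) {
                throw new Exception("liveness check failed: no match for " + pattern);
            }
            for (int g = 1; g <= matcher.groupCount(); g++) {
                System.out.println("group" + g + " = " + matcher.group(g));
            }
        }
    }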
From: Guijarro, J. <jul...@hp...> - 2008-02-13 10:24:47
|
Hi Ismael,

sfPing starts once the component is deployed and therefore it will check if the component is alive during sfDeploy and sfStart as well. This means that the components should not block on the sfDeploy and sfStart methods. The general rule is that any long operation (ex. starting a DB) started inside those methods should be processed in a separate thread to avoid liveness failures.

To test if this is your case you can make liveness 0 (liveness = disabled):

    org.smartfrog.sfcore.processcompound.sfLivenessDelay=0

and see if your example now works. If this solves your problem please file a bug against the Jetty component and we will fix it for next release.

Regards,

Julio Guijarro

-----Original Message-----
From: sma...@li... [mailto:sma...@li...] On Behalf Of Ismael Juma
Sent: 13 February 2008 01:08
To: sma...@li...
Subject: [Smartfrog-support] Termination due to slow start-up.

Hi,

I've been experimenting with SmartFrog for deployment of Jetty in multiple nodes. I have made good progress, but I found one issue that I am unsure how to deal with. Basically the web application is a bit slow to start causing it to be terminated as soon as it finishes the start-up process. I added "org.smartfrog.sfcore.processcompound.sfProcessTimeout=300" to default.ini, but the problem still happens (even though start-up takes less than that for sure). I can also confirm that if I disable the slow operation (takes between 40 to 70 seconds), everything works as expected.

After investigating a stacktrace[1] produced with tracing enabled, I found that JettyToSFLifecycle#sfPing will cause a liveness failure if the lifecycle is not yet running. Is it possible that a liveness test is causing a failure because start-up has not finished even though the sfProcessTimeout has not yet expired? If so, is this by design? My expectation was that liveness tests could only cause a failure once the component had started since there's a specific test (with different time-out settings) for start-up. Did I misunderstand how it's supposed to work? If so, I would appreciate suggestions on how to deal with this case.

Thanks,
Ismael

[1]
2008/02/12 17:54:17:645 GMT [TRACE][LivenessSender_HOST desktop.config:rootProcess:*unknown*] HOST desktop.config:testProcess:testProcess - HOST desktop.config:testProcess:testProcess Termination Record: HOST desktop.config:testProcess:testProcess:webApp, type: abnormal, description: Liveness Send Failure in HOST desktop.config:testProcess:testProcess when calling HOST desktop.config:testProcess:testProcess: webApp (Failure: Not started), cause: SmartFrogLivenessException:: Not started, SmartFrog 3.12.018 (2008-01-21 12:47:17 GMT) <SmartFrogLivenessException:: Not started SmartFrog 3.12.018 (2008-01-21 12:47:17 GMT)> SmartFrogLivenessException:: Not started, SmartFrog 3.12.018 (2008-01-21 12:47:17 GMT)
        at org.smartfrog.services.jetty.contexts.delegates.DelegateApplicationContext.ping(DelegateApplicationContext.java:149)
        at org.smartfrog.services.www.context.ApplicationServerContextImpl.ping(ApplicationServerContextImpl.java:244)
        at org.smartfrog.services.www.context.ApplicationServerContextImpl.sfPing(ApplicationServerContextImpl.java:213)
        at org.smartfrog.sfcore.compound.CompoundImpl.sfPingChild(CompoundImpl.java:799)
        at org.smartfrog.sfcore.compound.CompoundImpl.sfPingChildAndTerminateOnFailure(CompoundImpl.java:782)
        at org.smartfrog.sfcore.compound.CompoundImpl.sfPingChildren(CompoundImpl.java:769)
        at org.smartfrog.sfcore.compound.CompoundImpl.sfPing(CompoundImpl.java:755)
        at org.smartfrog.sfcore.prim.LivenessSender.timerTick(LivenessSender.java:61)
        at org.smartfrog.sfcore.common.Timer.doTick(Timer.java:155)
        at org.smartfrog.sfcore.common.Timer.run(Timer.java:187)
        at java.lang.Thread.run(Thread.java:619) |
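The fix Julio describes (keep the slow work off the lifecycle thread so that sfDeploy/sfStart return promptly and liveness pings keep being answered) looks roughly like the sketch below. This is not the Jetty component's actual code: the PrimImpl lifecycle signatures and package names are assumed from the standard SmartFrog core, and startWebApplication() is a placeholder for the 40-70 second operation.

    import java.rmi.RemoteException;
    import org.smartfrog.sfcore.common.SmartFrogException;
    import org.smartfrog.sfcore.prim.PrimImpl;

    // Sketch: run slow start-up work on a worker thread so that sfStart()
    // returns quickly and liveness pings are answered while the web
    // application is still coming up.
    public class SlowStartComponent extends PrimImpl {

        private volatile boolean applicationUp = false;    // flipped when the slow work completes
        private volatile Throwable startupFailure = null;  // recorded for later inspection

        public SlowStartComponent() throws RemoteException {
        }

        public synchronized void sfStart() throws SmartFrogException, RemoteException {
            super.sfStart();
            // Hand the slow part to a worker thread; the lifecycle thread is free again.
            new Thread(new Runnable() {
                public void run() {
                    try {
                        startWebApplication();             // placeholder for the real slow operation
                        applicationUp = true;
                    } catch (Throwable t) {
                        startupFailure = t;                // a real component should trigger termination here
                    }
                }
            }, "slow-startup").start();
        }

        private void startWebApplication() throws Exception {
            // the real component would deploy contexts, open listeners, etc.
        }
    }

A real component would also want its liveness handling to tolerate "still starting" while reporting genuine start-up failures, which is essentially the behaviour Ismael expected from the Jetty delegate.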
From: Steve L. <ste...@hp...> - 2008-01-28 13:05:17
|
Guijarro, Julio wrote:
> We don't have any command line tool to do that but it would not be difficult to add one.
>
> You could have a component that registers itself with the Termination Hooks and then either sends you a notification or terminates itself (and, for example, you block on it until it terminates).
>
> The termination hook sends information about the name of the component that has terminated and its termination record.

The test compounds (TestBlock, TestCompound) can send out events to listening processes; they notify when they start and stop, and when tests underneath finish. This is how I bridge from junit to test runs (look at org.smartfrog.test.DeployingTestBase in testharness/src).

I was actually thinking of doing some work on those to superclass the test events with a generic 'build event', so that other 'builders' that do work in a build workflow can feed events to a general listener. Ideally with some percentage-done/estimated-time-remaining data, though that's really hard to get right.

With something like that formalised, it'd be easy enough to have an ant task and command-line script to block until something reaches a desired state, terminates, or a timeout kicks in.

-steve |
From: Steve L. <ste...@hp...> - 2008-01-28 11:01:38
|
I'm pleased to announce that SmartFrog 3.12.018 is out the door, text release notes follow and the HTML version (with clickable links) is attached). This release announcement also includes all the changes that went in to the 3.12.016 release, which came out just before the year end and which wasn't announced as it was just an interim release for heavy SF users, put out to keep us strict about releasing often. This new release is stable and working in production environments. -Steve Loughran HP Laboratories SmartFrog 3.12.018 ====================== This is a new release of SmartFrog, the Java-based, LPGL-licensed distributed deployment framework developed by HP Laboratories. SmartFrog enables applications to be deployed across multiple machines, configuring different aspects of the system so that they are all consistently configured, and managing the life-cycle of the application as a whole. The project's home page is http://smartfrog.org/ The release artifacts are available at http://sourceforge.net/project/showfiles.php?group_id=87384&package_id=176308 This release is 3.12.018; built from revision 5781 of the SVN repository. This release has an extended language with the ability to tag attributes, and includes the following items: * Core smartfrog daemon, including services to manage files, start and stop Java and native programs. * Example components and applications. * Ant support: ant tasks to deploy and terminate applications from a build. * Ant components: the ability to execute ant tasks in a deployment. * Anubis: a partition aware tuple-space that can be used to implement fault tolerant systems. * Database: components to issue database commands, and deploy HSLDB and MySQL. * JMX: the ability to configure and manage JMX components, and to manage SmartFrog components over JMX. * Logging: integration with Apache commons-logging and Log4J * Networking: email, FTP, SSH, DNS support. * Quartz: scheduled operations using Quartz libraries. * Scripting: support for BSF-hosted scripting languages * Testing: Distributed JUnit and component testing with SFUnit. * WWW: deployment of WAR and EAR files to application servers. deploy-by-copy is provided for all application servers that support it, and sample templates are provided to start and stop Tomcat and JBoss. The Jetty component can configure and deploy individual servlets, eliminating much of the need for WAR files and application servers. * XML: XML support with XOM. * XMPP: Presence and messaging over Jabber. Packaging ========= This release is available as: * RPM files inside a .tar.gz file. * a JAR installer. * the original core smartfrog distribution as .zip and .tar.gz (deprecated) The RPM installation is for RPM-based Linux systems. The archive contains the following RPM files: smartfrog: the core SmartFrog distribution. smartfrog-daemon: the shell scripts to add the smartfrog distribution to the path, and to run the daemon on start-up. smartfrog-demo: example code and documentation. 
smartfrog-javadocs: javadocs for the project smartfrog-ant: Ant task and build file execution smartfrog-anubis: Distributed partition-aware tuple space smartfrog-database: Database access smartfrog-jmx: JMX integration though MX4J smartfrog-junit: Junit 3.8.2 test execution smartfrog-logging: Logging through Log4J and commons-logging smartfrog-networking: SSH, SCP, FTP and email smartfrog-quartz: Scheduled operations smartfrog-scripting: Scripted components smartfrog-www: Web support: Deployment and liveness pages smartfrog-xml: XML Support smartfrog-xmpp: XMPP/Jabber communications smartfrog-xunit: Distributed testing and reporting All the JAR files are also published to a repository that is compatible with Apache Maven and Ivy. Add http://smartfrog.sourceforge.net/repository/ to your repository list to pull SmartFrog artifacts into your Ivy- or Maven- based build. There are also SmartFrog components to retrieve artifacts from such a repository (the Library components under /org/smartfrog/services/os/java/library.sf ), which can be used for dynamic download of SmartFrog and other artifacts. Security warning ================ Unless SmartFrog is configured with security, a running daemon will listen on its configured port for incoming deployment requests, and deploy the applications with the rights of the user running the daemon. When the smartfrog-daemon RPM is installed, that means that a process running as root will be listening on an open port for incoming deployment requests. Do not deploy SmartFrog this way on any untrusted network, not without turning security on and, ideally, recreating the RPMs with signed JAR files. Building SmartFrog ================== SmartFrog requires Java 1.5 and Ant 1.7 to build. The izpack and source .zip and .tar.gz distributions include a source tree adequate to build the entire system. To build a later release, please follow the instructions at http://sourceforge.net/svn/?group_id=87384 to check out smartfrog/trunk/core from our repository. This release was built with revision 5781 of the repository, which is available under the SVN branch https://smartfrog.svn.sourceforge.net/svnroot/smartfrog/tags/release3.12.018 We strongly encourage anyone interested in building or extending SmartFrog toget on the mailing list, which can be found under the sourceforge project page: http://sourceforge.net/projects/smartfrog/ Reporting Bugs ============== Please file all bug reports at http://jira.smartfrog.org/ Thank you! The SmartFrog Team http://smartfrog.org/ Changes since last release ========================== There was a release in late december, 3.12.016. Although published to the central repository, it was considered an interim release before the next stable version, namely 3.12.018. We have included the 3.12.016 release notes alongside those of 3.12.018, as this release is the first time that the 3.12.016 changes are likely to be picked up. As well as ongoing improvements in test handling and execution, there is now support for locally-published Ivy artifacts from the library components, components that can download versioned JAR files from private or public repositories as part of a deployment. The SSH/SCP components have been reworked to extract password information from password providers; a number of such password providers have been implemented for use by all components that accept passwords. 
Release Notes - SmartFrog - Version 3.12.018 ** Bug * [SFOS-152] - Cargo tests are failing, probably we are out of sync with cargo * [SFOS-482] - Jetty hangs if the number of socket acceptors is so great there are no threads left to handle the work * [SFOS-586] - string OOBE in netbeans 6 plugin * [SFOS-587] - sax parse exception parsing the smartfrog grammar in netbeans * [SFOS-596] - rpm .tar.gz bundle is created without .tar in the filename * [SFOS-599] - IvyLocalCachePolicy doesn't look for cached artifacts in the right place * [SFOS-606] - Test compound messages could be improved * [SFOS-607] - ant-launcher is actually needed by ant.jar * [SFOS-610] - the original classname of ExtractedExceptions should be compared in tests ** Improvement * [SFOS-602] - Move library unit tests into their own package ** New Feature * [SFOS-598] - add jarVersion attribute to Version * [SFOS-600] - Add a NoRemoteAccessPolicy that can be used to declare that remote access is not supported ** Task * [SFOS-138] - Move cargo components up to Cargo 0.9 * [SFOS-145] - Add ivy policy to retrieve from locally published artifacts * [SFOS-533] - add exception list support to TestCompoundImpl * [SFOS-592] - Deployment Reporting for Avalanche. Release Notes - SmartFrog - Version 3.12.016 ** Bug * [SFOS-581] - test descriptions are not preserved in TestLifecycleEvents * [SFOS-585] - ClassNotFoundException in netbeans plugin in netbeans 6.0 * [SFOS-593] - Multihost mode for Actions or Scripts, fail when using sfPing and component is in second host while first host is also available. * [SFOS-594] - properties matching the test.* pattern are not set in the test daemon ** Improvement * [SFOS-262] - SSH Component: Brining common stuff for passwd and public key authentication to a super class. * [SFOS-580] - improve netbeans support through relative import paths * [SFOS-583] - add ant's optional JARs to components/ant/ivy.xml ** Task * [SFOS-579] - review the various warnings the IDE is giving about xml components ----------------------- Hewlett-Packard Limited Registered Office: Cain Road, Bracknell, Berks RG12 1HN Registered No: 690597 England |
From: Guijarro, J. <jul...@hp...> - 2008-01-23 14:06:39
|
We don't have any command line tool to do that but it would not be difficult to add one.

You could have a component that registers itself with the Termination Hooks and then either sends you a notification or terminates itself (and, for example, you block on it until it terminates).

The termination hook sends information about the name of the component that has terminated and its termination record.

An example of how to use the termination hooks is the Trace component: org/smartfrog/services/trace/SFTrace

    /** Terminate tracer. */
    private SfTerminateWithTracer sfTerminateWithTracer = new SfTerminateWithTracer();
    ...
    //add hook
    sfTerminateWithHooks.addHook(sfTerminateWithTracer);
    ...
    //remove hook
    sfTerminateWithHooks.removeHook(sfTerminateWithTracer);
    ...
    //Example of listening to hook events.
    /**
     * Utility inner class - terminate tracer
     */
    private class SfTerminateWithTracer implements PrimHook {
        /**
         * sfHookAction for terminating
         *
         * @param prim prim component
         * @param terminationRecord TerminationRecord object
         *
         * @throws SmartFrogException in case of any error
         */
        public void sfHookAction(Prim prim, TerminationRecord terminationRecord)
                throws SmartFrogException {
            Date date = new Date(System.currentTimeMillis());
            try {
                prim.sfReplaceAttribute("sfTracesfTraceTerminateLifeCycle", date);
            } catch (RemoteException rex) {
                printMsg(rex.toString(), null);
            }
            printMsgTerminate(getDN(prim), terminationRecord.errorType,
                    terminationRecord.toString(), date);
        }
    }

Regards,

Julio

-----Original Message-----
From: sma...@li... [mailto:smartfrog-suppor...@li...] On Behalf Of Dominik Pospisil
Sent: 23 January 2008 13:49
To: sma...@li...
Subject: [Smartfrog-support] Determinig deployment termination status

Hello,

is there a way to determine deployment termination status outside of the SF environment? I want to achieve something like this:

1) call the sfStart shell script
2) block until the component terminates
3) check the component TerminationRecord (normal/abnormal)

Currently, I am periodically checking whether the component is still alive using sfDiagnostics, but I am not able to catch the component termination status. I can easily do this by adding some notification code to sf components, but I really want to implement this mechanism generally, working for all components.

Thanks a lot,

- Dominik |
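For the "block until it terminates, then check the record" part of Dominik's question, one possible shape is sketched below: a hook in the style of the SFTrace example above that releases a latch when it fires. The package locations for Prim, PrimHook, TerminationRecord and SmartFrogException are assumed from the SmartFrog core, and awaitTermination() is a made-up helper, not an existing API.

    import java.util.concurrent.CountDownLatch;
    import org.smartfrog.sfcore.common.SmartFrogException;
    import org.smartfrog.sfcore.prim.Prim;
    import org.smartfrog.sfcore.prim.PrimHook;
    import org.smartfrog.sfcore.prim.TerminationRecord;

    // A termination hook that lets a caller block until a component terminates
    // and then inspect the termination record (normal/abnormal).
    public class BlockingTerminationHook implements PrimHook {

        private final CountDownLatch done = new CountDownLatch(1);
        private volatile TerminationRecord record;

        public void sfHookAction(Prim prim, TerminationRecord terminationRecord)
                throws SmartFrogException {
            record = terminationRecord;
            done.countDown();                  // release anyone waiting below
        }

        /** Block until the hook fires, then return the record for inspection. */
        public TerminationRecord awaitTermination() throws InterruptedException {
            done.await();
            return record;
        }
    }

Registered the same way as the tracer above (sfTerminateWithHooks.addHook(...)), a small wrapper could wait on awaitTermination() and exit 0 or 1 depending on whether the record reports normal or abnormal termination.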
From: Steve L. <ste...@hp...> - 2008-01-10 10:47:52
|
Bill de hOra wrote:
> Hi Steve,

Hi, happy new year etc.

> Tbh, the more I think about it*, outside the WAR/exploded and add that to the classpath, but to start with into the exploded (WEB-INF/classes) would do; I can worry about how to deal with lost updates (when the WAR is redeployed) later.

Hmm. The way I like to do these configurations is to start up the app server with the specific properties I choose; you deploy the WAR file as is, but set up Tomcat or JBoss with the properties for the JDBC binding and suchlike. You set those properties up from SmartFrog or Ant, and all is well -provided that is how you start the application.

I've played in the past with machines picking up machine-specific property files based on their hostname, letting us have one WAR for different clusters. This worked until operations decided to give the staging machines the same hostname as production.

>> 2. what kind of CMDB are you thinking of?
>
> A custom webapp built on top of a vcs (eg subversion/mercurial); CMDBs I've looked at seem to do everything except configuration management.

If it supports GET, you can get the properties file somehow.

> Either way I'd like to split publishing of these files from their management. So in theory I can start with just some files in a folder and push those.
>
> cheers
> Bill
>
> * Since writing the first post, I'm starting to conclude that the war/ear model is just busted. But I figure the Java world must have solved this problem at some point.

It is broken. The whole vision of 'assemblers' building WAR/EAR files from reusable EJB beans bought off the shelf turned out to be wrong. And while they had all these various roles for people ('developer', 'assembler', 'administrator'), they left out the tester and the 'operations team'.

What you can't really pull off is a WAR file that targets >1 installation, yet contains all the configuration information. Not in the web.xml file, log4j.properties, etc. The whole idea of a self-contained WAR/EAR file fails when you
-want to work with multiple installations
-need to add a JDBC driver to the app server itself.

This is why in Ant in Action I defined an EAR file to be 'an installation-specific bundling of an application'. Once you accept that, you can have a build that creates custom EAR files for different machines.

The alternative is to have a single WAR/EAR file and do the configuration *outside* the war, through
-JVM properties
-LDAP configuration
-name-value pairs in the database (with some way of getting to the database preconfigured)
-hostnames (e.g. edit DNS or /etc/hosts with the specific addresses of 'database' and 'filestore'); your code assumes they are set up right somehow

--
-----------------------
Hewlett-Packard Limited
Registered Office: Cain Road, Bracknell, Berks RG12 1HN
Registered No: 690597 England |
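As a rough illustration of the "configuration outside the WAR through JVM properties" option above: the property names, fallback file path and class below are invented for the example (nothing SmartFrog or Tomcat ships), but the lookup order (system property first, then an external properties file named by another property, then a default) is the whole trick.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Configuration resolved from outside the WAR: a JVM system property wins,
    // then an external properties file, then a hard-coded default.
    public final class ExternalConfig {

        private static final Properties FILE_PROPS = loadExternalFile();

        private static Properties loadExternalFile() {
            Properties props = new Properties();
            // e.g. -Dapp.config.file=/etc/myapp/app.properties set by SmartFrog/Ant at launch
            String path = System.getProperty("app.config.file");
            if (path != null) {
                try {
                    FileInputStream in = new FileInputStream(path);
                    try {
                        props.load(in);
                    } finally {
                        in.close();
                    }
                } catch (IOException e) {
                    // fall through to defaults; a real app would log this
                }
            }
            return props;
        }

        public static String get(String key, String defaultValue) {
            String value = System.getProperty(key);
            return value != null ? value : FILE_PROPS.getProperty(key, defaultValue);
        }
    }

    // usage: String jdbcUrl = ExternalConfig.get("app.jdbc.url", "jdbc:hsqldb:mem:dev");

The WAR itself then stays identical across dev, staging and production; only the launch-time properties (which SmartFrog or Ant can set) differ.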
From: Bill de h. <bi...@de...> - 2008-01-09 22:06:16
|
Steve Loughran wrote: > Bill de hOra wrote: >> Hi, >> >> I have a scenario for a small tomcat cluster; a setup where properties >> files are placed under version control, can get pushed out separately >> from code drops and can be changed separately from code drops >> (properties are managed in a cmdb, code in a vcs). There are multiple >> webapps, each with their own config (eg 4 app servers, 5 gateways, 3 >> seach nodes, ...) Fwiw, this setup isn't already using smartfrog to >> deploy the webapps. >> >> I'm wondering if this is a good problem for smartfrog to solve, or if >> anyone here is doing something like this. I have fair idea what it can >> do for configurations of components but not for configuration inside >> components. >> >> I've been thinking about some alternate approaches, ranging from plain >> old scp, jmx, agents updating from central version control, push from >> distributed version control. All of which have specific problems, with >> two notable common ones; a) it'll end being a dedicated subsystem >> that only does properties config and yet another thing to manage, b) >> you have to think about what happens when the webapps are upgraded and >> possibly overwrite the properties files. >> >> Thoughts? >> >> cheers >> Bill >> > > Bill, > > I'm just back from my travels and am too jet-lagged to come up with a > decent answer right now. give me a couple of days and I'll get some > suggestions in as to how best to do this. > > 1. Where do you want the properties files? in the WAR/expanded WAR > itself, or somewhere else in the filesystem? Hi Steve, Tbh, the more I think about it*, outside the WAR/exploded and add that to the classpath, but to start with into the exploded (WEB-INF/classes) would do; I can worry about how to deal with lost updates (when the WAR is redeployed) later. > 2. what kind of CMDB are you thinking of? A custom webapp built on top of a vcs (eg subversion/mercurial); CMDBs I've looked at seem to do everything except configuration management. Either way I'd like to split publishing of these files from their management. So in theory I can start with just some files in a folder and push those. cheers Bill * Since writing the first post, I'm starting to conclude that war/ear model is just busted. But I figure the Java world must have solved this problem at some point. |
From: Steve L. <ste...@hp...> - 2008-01-09 14:13:52
|
Bill de hOra wrote: > Hi, > > I have a scenario for a small tomcat cluster; a setup where properties > files are placed under version control, can get pushed out separately > from code drops and can be changed separately from code drops > (properties are managed in a cmdb, code in a vcs). There are multiple > webapps, each with their own config (eg 4 app servers, 5 gateways, 3 > seach nodes, ...) Fwiw, this setup isn't already using smartfrog to > deploy the webapps. > > I'm wondering if this is a good problem for smartfrog to solve, or if > anyone here is doing something like this. I have fair idea what it can > do for configurations of components but not for configuration inside > components. > > I've been thinking about some alternate approaches, ranging from plain > old scp, jmx, agents updating from central version control, push from > distributed version control. All of which have specific problems, with > two notable common ones; a) it'll end being a dedicated subsystem that > only does properties config and yet another thing to manage, b) you have > to think about what happens when the webapps are upgraded and possibly > overwrite the properties files. > > Thoughts? > > cheers > Bill > Bill, I'm just back from my travels and am too jet-lagged to come up with a decent answer right now. give me a couple of days and I'll get some suggestions in as to how best to do this. 1. Where do you want the properties files? in the WAR/expanded WAR itself, or somewhere else in the filesystem? 2. what kind of CMDB are you thinking of? -steve -- ----------------------- Hewlett-Packard Limited Registered Office: Cain Road, Bracknell, Berks RG12 1HN Registered No: 690597 England |
From: Bill de h. <bi...@de...> - 2008-01-07 19:26:47
|
Hi,

I have a scenario for a small Tomcat cluster; a setup where properties files are placed under version control, can get pushed out separately from code drops, and can be changed separately from code drops (properties are managed in a cmdb, code in a vcs). There are multiple webapps, each with their own config (eg 4 app servers, 5 gateways, 3 search nodes, ...). Fwiw, this setup isn't already using smartfrog to deploy the webapps.

I'm wondering if this is a good problem for smartfrog to solve, or if anyone here is doing something like this. I have a fair idea what it can do for configurations of components, but not for configuration inside components.

I've been thinking about some alternate approaches, ranging from plain old scp, jmx, and agents updating from central version control, to push from distributed version control. All of which have specific problems, with two notable common ones: a) it'll end up being a dedicated subsystem that only does properties config and yet another thing to manage; b) you have to think about what happens when the webapps are upgraded and possibly overwrite the properties files.

Thoughts?

cheers
Bill |
From: Guijarro, J. <jul...@hp...> - 2007-12-10 15:57:52
|
Hi Qian,

I think your solution should work. My suggestion or improvement would be to use multicast to propagate the changes directly to all the management nodes; instead of writing the configuration to a file and sending the file, why not send the .sf file directly to all the management nodes?

Another thing that you could do is to use the console to send the new .sf file to all the nodes (you could do this in text form or in ComponentDescription form if the recipients are sf Components). The master node would act on the new configuration and the other nodes would just store the configuration locally. In this way each management node would have a cached copy of the cluster configuration.

Another thing that you will need to add is how to recover a management node from a failure: for example, should a recovered node get a full copy of the entire configuration from the master node or a peer, or should it try to re-sync its config data? The first option is probably easier.

This is quite easy to do with Anubis because its protocol guarantees that what you receive is the same as what the other nodes in your "partition" see, and it simplifies what you have to do to avoid errors when synchronizing your configuration data in the cluster; but as I said, it could work equally well with your own protocol or with some other form of multicast and extra programming.

Regards,

Julio Guijarro

-----Original Message-----
From: Zhang Qian [mailto:zhq...@gm...]
Sent: 09 December 2007 02:15
To: Guijarro, Julio
Cc: Steve Loughran; smartfrog-developer; sma...@li...urceforge.net
Subject: Re: [Smartfrog-developer] Questions about SmartFrog

Hi Julio,

The configuration data of my cluster are small sets of attribute value pairs, not lots of data. The data amount is not large, but we really need the reliability.

Usually, I make a config change in the management console of my cluster, then this console communicates with the daemon in the master node and sends the config change to it. The daemon will activate the change and write it to the config file stored on NFS. Then other management nodes will also see the changes. But obviously, NFS could be a single point of failure for my cluster.

Now I am trying to change this flow. The config change I make in the management console will be saved as a .sf file, then I will run my own SmartFrog component which extends some SmartFrog inbuilt services. This component will get the config change by parsing the .sf file and send the change to the daemon in the master node. The daemon will activate this change, then the component I mentioned before will write the change to the local file, and propagate this file to all the management nodes.

Any suggestions about this approach? :-)

Thanks!

Regards,
Qian |
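Julio's multicast suggestion could look roughly like the plain-Java sketch below. The group address, port and "send the .sf text as the datagram payload" choice are arbitrary for illustration; raw UDP multicast gives none of the ordering or agreement guarantees that Anubis provides, so a real deployment would still need retransmission or a full copy on recovery, as discussed above.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    // Announce a configuration change to every management node via UDP multicast.
    public class ConfigChangeMulticast {

        private static final String GROUP = "230.0.0.17";   // arbitrary multicast group
        private static final int PORT = 4567;               // arbitrary port

        /** Master side: push the new .sf description to the group. */
        public static void announce(String sfDescription) throws Exception {
            byte[] payload = sfDescription.getBytes("UTF-8");
            MulticastSocket socket = new MulticastSocket();
            try {
                socket.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getByName(GROUP), PORT));
            } finally {
                socket.close();
            }
        }

        /** Candidate side: block for the next announcement and cache it locally. */
        public static String receiveNext() throws Exception {
            MulticastSocket socket = new MulticastSocket(PORT);
            socket.joinGroup(InetAddress.getByName(GROUP));
            try {
                byte[] buffer = new byte[64 * 1024];
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                return new String(packet.getData(), 0, packet.getLength(), "UTF-8");
            } finally {
                socket.close();
            }
        }
    }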
From: Guijarro, J. <jul...@hp...> - 2007-12-07 10:55:21
|
Hi Qian,

What kind of configuration data are you talking about? Is it lots of data or small sets of attribute value pairs?

One way that we have used Anubis is to propagate changes in the configuration data so that all the "master nodes" can see those changes and cache the changes locally. This is simple to do with Anubis because of the guarantees and consistency of the Anubis notifications. You could probably do something similar using your notification mechanism.

Then we have, as you mentioned, components to operate with the file system and/or with ftp/ssh/... that could be extended to meet your needs. One interesting component could be a wrapper for rsync, but this won't help you that much in n+1 configurations.

Other possibilities are: use simple multicast to announce changes in your configuration data, or use RSS feeds to propagate these changes.

The right solution will depend on your architecture and the type/amount of data to synchronize.

Regards,

Julio Guijarro

-----Original Message-----
From: sma...@li... [mailto:smartfrog-deve...@li...] On Behalf Of Zhang Qian
Sent: 07 December 2007 06:06
To: Steve Loughran
Cc: smartfrog-developer
Subject: Re: [Smartfrog-developer] Questions about SmartFrog

> I see. How does the management console deal with failure of the master? Does it discover it using some discovery protocol, or is the active master expected to update a dynamic DNS entry?

Yes, we deal with this issue via DNS.

Today I took a look at the Anubis document. As I understand it, Anubis is a notification service and provides a detection mechanism for distributed systems. But in my cluster, we already have this kind of mechanism for detecting the status of our key daemons, dealing with master failure, etc. We don't want to change that; we just want to remove the shared-file-system dependency. Anubis looks a little big for this request. As far as I know, SmartFrog ships some inbuilt services for file operations and downloading in its package. I am wondering whether it is possible to fulfil my request by writing a SmartFrog component which just extends these inbuilt services.

Thanks,
Qian

-----Original Message-----
From: sma...@li... [mailto:smartfrog-deve...@li...] On Behalf Of Steve Loughran
Sent: 06 December 2007 13:15
Cc: smartfrog-developer
Subject: Re: [Smartfrog-developer] Questions about SmartFrog

Zhang Qian wrote:
> Hi All,
>
> Thanks for your replies about this topic.
>
> I'd like to share more details about my cluster with you. As you know, it's a cluster that includes hundreds of nodes. We divide these nodes into two categories: management nodes and computing nodes.

I see. We tend to prefer the tactic of letting any node become a master (with agreement), because it stops you having to decide which machines are in charge. Whatever boots up first can take over.

> For computing nodes, they just run the task arranged to them, do not have management roles, so we don't care about them in this case.

OK -the workers are expected to fail and are told what to do; if they go away then something else gets the job.

> For management nodes, we have a dozen of this kind of nodes in the cluster. Only one of them is the master node whose responsibility is to manage the entire cluster; the others are just the master candidates. The reason we do it in this way is to avoid single point failure: once the master node fails, a master candidate will take over its job and become the new master node. So we have the heartbeat mechanism to detect the node status to realize fail-over.

OK. You're giving one machine charge of the resource management problem, but by sharing the data amongst candidates, if the master goes away you can have an election of some sort to decide who is the new master.

> Now there is a limitation: our cluster relies on a shared file system (such as NFS) which can be accessed by all the management nodes. That means all the config files are placed on the shared file system, and all the management nodes need these config files. It's the master node's responsibility to update these config files according to the user's request; after a fail-over, the new master node will read these config files to know the latest configuration.

ah, so
1. the NFS filestore is a failure point
2. you need to save the configuration to a filesystem that doesn't go out of its way to enable locking

> Now we want to remove the shared-file-system dependency: each management node has config files in its local file system. So obviously, we need a mechanism to synchronize these config files on all the management nodes. That's why I asked those questions. I don't know whether there is an inbuilt component or service that can provide this kind of mechanism in SmartFrog. Certainly I will investigate Anubis first, thanks for your sharing.

This is what anubis is designed for, to make a cluster out of a set of machines on a LAN. The papers and Paul can provide more details.

> In addition, we have had a management console for the user which will communicate with our daemon in the master node, and deliver config changes to that daemon. After receiving the config change, this daemon will verify and activate the change first, then write it into the config file placed on the shared file system.

I see. How does the management console deal with failure of the master? Does it discover it using some discovery protocol, or is the active master expected to update a dynamic DNS entry?

-----Original Message-----
From: sma...@li... [mailto:smartfrog-deve...@li...] On Behalf Of Zhang Qian
Sent: 06 December 2007 03:31
To: Steve Loughran
Cc: smartfrog-developer
Subject: Re: [Smartfrog-developer] Questions about SmartFrog

In addition, we have had a management console for the user which will communicate with our daemon in the master node, and deliver config changes to that daemon. After receiving the config change, this daemon will verify and activate the change first, then write it into the config file placed on the shared file system. This is what we are doing, but we want to remove the shared-file-system dependency.

Thanks and Regards,
Qian

-----Original Message-----
From: sma...@li... [mailto:smartfrog-deve...@li...] On Behalf Of Zhang Qian
Sent: 06 December 2007 02:58
To: Steve Loughran
Cc: smartfrog-developer
Subject: Re: [Smartfrog-developer] Questions about SmartFrog

Hi All,

Thanks for your replies about this topic.

I'd like to share more details about my cluster with you. As you know, it's a cluster that includes hundreds of nodes. We divide these nodes into two categories: management nodes and computing nodes.

For computing nodes, they just run the task arranged to them, do not have management roles, so we don't care about them in this case.

For management nodes, we have a dozen of this kind of nodes in the cluster. Only one of them is the master node whose responsibility is to manage the entire cluster; the others are just the master candidates. The reason we do it in this way is to avoid single point failure: once the master node fails, a master candidate will take over its job and become the new master node. So we have the heartbeat mechanism to detect the node status to realize fail-over.

Now there is a limitation: our cluster relies on a shared file system (such as NFS) which can be accessed by all the management nodes. That means all the config files are placed on the shared file system, and all the management nodes need these config files. It's the master node's responsibility to update these config files according to the user's request; after a fail-over, the new master node will read these config files to know the latest configuration.

Now we want to remove the shared-file-system dependency: each management node has config files in its local file system. So obviously, we need a mechanism to synchronize these config files on all the management nodes. That's why I asked those questions.

I don't know whether there is an inbuilt component or service that can provide this kind of mechanism in SmartFrog. Certainly I will investigate Anubis first, thanks for your sharing.

Regards,
Qian |
From: Steve L. <ste...@hp...> - 2007-12-05 18:46:04
|
I'm pleased to announce the release of SmartFrog 3.12.014; the artifacts are up under https://sourceforge.net/project/showfiles.php?group_id=87384&package_id=108447&release_id=559511

This is probably going to be the last release for 2007, unless we have an urge to do one just before Christmas to keep the release schedule up. There are no major changes, just bug fixes, better documentation for some things, and a javadocs RPM. Javadocs go into /usr/share/javadocs/smartfrog-${version-number}; there may be some changes under there in future releases.

One thing we have done is tighten the security of the RPM-installed SmartFrog so that the logs directory is now only writeable by SmartFrog. This would stop anyone from running SmartFrog unless they had the right privileges, so the log-to-file code has been enhanced to fall back gracefully if the configured log directory is not writeable: it switches to java.io.tmpdir in this situation. Note that locking down the log directories does not make your system secure if your daemon is accepting incoming calls from unauthenticated callers; it merely hides the problems.

As usual, please don't hesitate to provide feedback, bug reports, test cases, improvements to the documentation, etc.

Early roadmap of pending changes
====================

I'm going to mention some imminent enhancements for anyone who wants to check out the repository and get involved:

-Migration to OSGi. This is a merge of a branch that is already in the repository; we will switch to OSGi to manage classpaths. This will improve classloading, isolation of classes, and other things, so we are looking forward to this change.

-There is a restlet client component being put together as part of some Amazon S3/EC2 support. The client will be general purpose and let you declare which HTTP verbs to apply during startup, pings and termination; you can issue a list for each action if you want. I want to add some XPath extraction of result text too. Query parameters and headers can be set by a list. The result will be a component that can manage the lifecycle of a remote resource (PUT on start, GET on ping, DELETE on termination), or issue complex HTTP operations against remote resources, and turn the result into attributes that can be picked up by other components. Participation in features and functionality of this code is encouraged, especially from anyone who is deploying in the S3/EC2 farm.

-steve

SmartFrog 3.12.014
======================

This is a new release of SmartFrog, the Java-based, LGPL-licensed distributed deployment framework developed by HP Laboratories. SmartFrog enables applications to be deployed across multiple machines, configuring different aspects of the system so that they are all consistently configured, and managing the life-cycle of the application as a whole.

The project's home page is http://smartfrog.org/
The release artifacts are available at http://sourceforge.net/project/showfiles.php?group_id=87384&package_id=176308

This release is 3.12.014, built from revision 5649 of the SVN repository. It has an extended language with the ability to tag attributes, and includes the following items:

* Core smartfrog daemon, including services to manage files, start and stop Java and native programs.
* Example components and applications.
* Ant support: ant tasks to deploy and terminate applications from a build.
* Ant components: the ability to execute ant tasks in a deployment.
* Anubis: a partition-aware tuple-space that can be used to implement fault-tolerant systems.
* Database: components to issue database commands, and deploy HSQLDB and MySQL.
* JMX: the ability to configure and manage JMX components, and to manage SmartFrog components over JMX.
* Logging: integration with Apache commons-logging and Log4J.
* Networking: email, FTP, SSH, DNS support.
* Quartz: scheduled operations using the Quartz libraries.
* Scripting: support for BSF-hosted scripting languages.
* Testing: Distributed JUnit and component testing with SFUnit.
* WWW: deployment of WAR and EAR files to application servers. Deploy-by-copy is provided for all application servers that support it, and sample templates are provided to start and stop Tomcat and JBoss. The Jetty component can configure and deploy individual servlets, eliminating much of the need for WAR files and application servers.
* XML: XML support with XOM.
* XMPP: presence and messaging over Jabber.

Packaging
=========

This release is available as:
* RPM files inside a .tar.gz file.
* a JAR installer.
* the original core smartfrog distribution as .zip and .tar.gz (deprecated)

The RPM installation is for RPM-based Linux systems. It comprises three RPM files: smartfrog, smartfrog-daemon and smartfrog-demo.

smartfrog: the core SmartFrog distribution.
smartfrog-daemon: the shell scripts to add the smartfrog distribution to the path, and to run the daemon on start-up.
smartfrog-demo: example code and documentation.

All the JAR files are also published to a repository that is compatible with Apache Maven and Ivy. Add http://smartfrog.sourceforge.net/repository/ to your repository list to pull SmartFrog artifacts into your Ivy- or Maven-based build. There are also SmartFrog components to retrieve artifacts from such a repository (the Library components under /org/smartfrog/services/os/java/library.sf ), which can be used for dynamic download of SmartFrog and other artifacts.

Security warning
================

Unless SmartFrog is configured with security, a running daemon will listen on its configured port for incoming deployment requests, and deploy the applications with the rights of the user running the daemon. When the smartfrog-daemon RPM is installed, that means that a process running as root will be listening on an open port for incoming deployment requests. Do not deploy SmartFrog this way on any untrusted network without turning security on and, ideally, recreating the RPMs with signed JAR files.

Building SmartFrog
==================

SmartFrog requires Java 1.5 and Ant 1.7 to build. The izpack and source .zip and .tar.gz distributions include a source tree adequate to build the entire system. To build a later release, please follow the instructions at http://sourceforge.net/svn/?group_id=87384 to check out smartfrog/trunk/core from our repository.

This release was built with revision 5645 of the repository, which is available under the SVN branch https://smartfrog.svn.sourceforge.net/svnroot/smartfrog/tags/release3.12.014

We strongly encourage anyone interested in building or extending SmartFrog to get involved in the SmartFrog developer mailing list, which can be found from the sourceforge project page http://sourceforge.net/projects/smartfrog/

Reporting Bugs
==============

Please file all bug reports at http://jira.smartfrog.org/

Thank you!

The SmartFrog Team
http://smartfrog.org/

Changes since last release
==========================

There are no major changes in this release, only ongoing bug fixes, minor improvements, and build process tuning. The RPMs have been improved; the javadocs for the core JARs are provided as their own RPM.

** Bug
* [SFOS-560] - -x bit is set in /etc/sysconfig/smartfrog
* [SFOS-561] - sfResolveHereNonlocal does not delegate to sfResolveHere when the attribute is not in the context and the dns component overrides sfResolveHere to give a default value
* [SFOS-564] - regression in RunShell; list operations causing NPE
* [SFOS-572] - default for sfDump does not handle references with repeated names correctly
* [SFOS-574] - jjdocs target fails on CruiseControl
* [SFOS-575] - common.xml javadoc fails if there is no source

** Improvement
* [SFOS-265] - bits of smartfrog aren't forwarding SmartFrogResolutionExceptions consistently
* [SFOS-501] - Ant component needs more tests
* [SFOS-548] - Add SFNULL to schemas template
* [SFOS-551] - add Xalan and JDOM to the XML component
* [SFOS-552] - review XML component source+build; add in Xalan and JDOM
* [SFOS-556] - move up to httpunit 1.6.2 for testing www componentry
* [SFOS-569] - Add attribute to filter the output of positive searches. Useful to remove the "echoExit command" from the standard output.
* [SFOS-573] - logToFile: when failing to create a log file, it should try the java temp dir before failing

** New Feature
* [SFOS-559] - Create RPMs for the other packages: ant, database, jmx, xunit, junit, net, www, quartz
* [SFOS-568] - Add a component to test the specific OS

** Task
* [SFOS-129] - incorporate ivy published documentation into the release artifacts
* [SFOS-357] - Move Jetty support up to Jetty6
* [SFOS-467] - Admin/Debug servlets are no longer in Jetty6; remove the components and their tests
* [SFOS-540] - document the Ant components

** Sub-task
* [SFOS-473] - Add SSL support with an SSLSocketListener
* [SFOS-475] - Move realm/security config out of SFJetty and make reusable
* [SFOS-476] - Remove SFJettyAdmin as the servlet is gone
|
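The Library components mentioned in the release notes above can be pointed at the Maven/Ivy-compatible repository to pull JARs down at deployment time. A minimal sketch follows; the Library template name comes from the notes, but the attribute names used here (repository, project, artifact, version) are illustrative assumptions rather than the documented interface of library.sf:

// Sketch only: attribute names are assumptions; check
// /org/smartfrog/services/os/java/library.sf for the real interface.
#include "/org/smartfrog/services/os/java/library.sf"

sfConfig extends Compound {
    sfCoreJar extends Library {
        repository "http://smartfrog.sourceforge.net/repository/";
        project    "org/smartfrog";
        artifact   "smartfrog";
        version    "3.12.014";
    }
}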
From: Steve L. <ste...@hp...> - 2007-11-26 18:56:53
|
Jeff Garratt wrote:
> Now getting this issue with scripts that work when running 008:
>
> _____________
> 2007/11/26 10:47:15:265 EST [WARN ][main] SFCORE_LOG - SmartFrog security is NOT active
>  - FAILED when trying DEPLOY of 'sceptre', [file:./resources/smartfrog/sfStartOG.sf], host:localhost
> Result:
>  * Exception: 'SmartFrogLifecycleException:: [sfStart] HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid
>    cause: SmartFrogDeploymentException: unnamed component. SmartFrogLifecycleException:: [sfDeploy] null, cause: java.lang.NullPointerException, SmartFrog 3.12.010 (2007-11-08 15:53:24), data: Failed object class: org.smartfrog.services.os.runshell.RunShellImpl, primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript, primContext: included, reference: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript, primContext: included
>    cause: SmartFrogLifecycleException:: [sfDeploy] null
>    cause: java.lang.NullPointerException
>    SmartFrog 3.12.010 (2007-11-08 15:53:24)
>    data: Failed object class: org.smartfrog.services.os.runshell.RunShellImpl
>    primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript
>    primContext: included
>    reference: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript
>    primContext: included
>    SmartFrog 3.12.010 (2007-11-08 15:53:24)
>    data: Failed object class: org.smartfrog.sfcore.workflow.combinators.Sequence
>    primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre
>    primContext: included
>    reference: HOST "9.76.153.205":rootProcess:sceptre
> ____
>
> Thanks so much,
> Jeff

Jeff, this is not good. Julio has replicated it and I've opened a bug report: http://jira.smartfrog.org/jira/browse/SFOS-564

Once I've got the test done I'll get the fix in; it will go into the next release, which is likely in the next fortnight.

For an early fix, if you are trying to start an ant script, have you tried using the ant-specific components we put in to 3.12.00? If you email me the bit of the script that you were using to start ant, I'll have a go at rewriting it for the new component.

-Steve
|
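For the antScript step in the trace above, a first cut at the rewrite Steve offers might use the AntBuild component from the sf-ant package. This is only a sketch: the attribute names (dir, targets, properties) are taken from the AntBuild outline further down this archive and from the 3.12.010 release notes, and the handling of a non-default build file name is an assumption:

// Sketch of replacing the RunShell-based antScript step with AntBuild;
// attribute names are assumptions taken from the outline later in this archive.
#include "/org/smartfrog/services/ant/components.sf"

antScript extends AntBuild {
    dir        "C:/temp";            // directory containing sceptre-build.xml
    targets    ["test-amit-perf"];   // the target the shell command invoked
    properties [];                   // name/value pairs to pass to the build
    // a non-default build file name (sceptre-build.xml) may need its own
    // attribute; check /org/smartfrog/services/ant/components.sf
}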
From: Guijarro, J. <jul...@hp...> - 2007-11-26 16:30:52
|
Hi Jeff,

It seems that RunShell is broken in this release. We will fix it and make another release soon. Sorry for the inconvenience, but RunShell is deprecated and we should not have changed anything on it. In any case, I would advise you to use ShellScript, which is a more advanced and properly documented version of the RunShell component.

Regards,

Julio

-----Original Message-----
From: sma...@li... [mailto:smartfrog-users-bo...@li...] On Behalf Of Jeff Garratt
Sent: 26 November 2007 15:51
To: sma...@li...
Subject: [Smartfrog-users] Issue with 3.12.010 from 008

Now getting this issue with scripts that work when running 008:

_____________
2007/11/26 10:47:15:265 EST [WARN ][main] SFCORE_LOG - SmartFrog security is NOT active
 - FAILED when trying DEPLOY of 'sceptre', [file:./resources/smartfrog/sfStartOG.sf], host:localhost
Result:
 * Exception: 'SmartFrogLifecycleException:: [sfStart] HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid
   cause: SmartFrogDeploymentException: unnamed component. SmartFrogLifecycleException:: [sfDeploy] null, cause: java.lang.NullPointerException, SmartFrog 3.12.010 (2007-11-08 15:53:24), data: Failed object class: org.smartfrog.services.os.runshell.RunShellImpl, primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript, primContext: included, reference: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript, primContext: included
   cause: SmartFrogLifecycleException:: [sfDeploy] null
   cause: java.lang.NullPointerException
   SmartFrog 3.12.010 (2007-11-08 15:53:24)
   data: Failed object class: org.smartfrog.services.os.runshell.RunShellImpl
   primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript
   primContext: included
   reference: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript
   primContext: included
   SmartFrog 3.12.010 (2007-11-08 15:53:24)
   data: Failed object class: org.smartfrog.sfcore.workflow.combinators.Sequence
   primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre
   primContext: included
   reference: HOST "9.76.153.205":rootProcess:sceptre
____

Thanks so much,
Jeff
|
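If switching to ShellScript as Julio suggests, the failing step might look something like the sketch below. The attribute names are borrowed from the WinNTShellScript usage elsewhere in this thread (workDir, cmd, shouldTerminate) and are assumptions about ShellScript's interface rather than its documented contract:

// Sketch only: ShellScript's real attribute names may differ; these are
// assumptions based on the WinNTShellScript examples in this thread.
antScript extends ShellScript {
    workDir         "C:/temp";
    cmd             ["ant -f sceptre-build.xml test-amit-perf"];
    shouldTerminate true;
}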
From: Jeff G. <jef...@ya...> - 2007-11-26 15:50:46
|
Now getting this issue with scripts that work when running 008:

_____________
2007/11/26 10:47:15:265 EST [WARN ][main] SFCORE_LOG - SmartFrog security is NOT active
 - FAILED when trying DEPLOY of 'sceptre', [file:./resources/smartfrog/sfStartOG.sf], host:localhost
Result:
 * Exception: 'SmartFrogLifecycleException:: [sfStart] HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid
   cause: SmartFrogDeploymentException: unnamed component. SmartFrogLifecycleException:: [sfDeploy] null, cause: java.lang.NullPointerException, SmartFrog 3.12.010 (2007-11-08 15:53:24), data: Failed object class: org.smartfrog.services.os.runshell.RunShellImpl, primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript, primContext: included, reference: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript, primContext: included
   cause: SmartFrogLifecycleException:: [sfDeploy] null
   cause: java.lang.NullPointerException
   SmartFrog 3.12.010 (2007-11-08 15:53:24)
   data: Failed object class: org.smartfrog.services.os.runshell.RunShellImpl
   primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript
   primContext: included
   reference: HOST "9.76.153.205":rootProcess:sceptre:startObjectGrid:startOGCatalogServer:antScript
   primContext: included
   SmartFrog 3.12.010 (2007-11-08 15:53:24)
   data: Failed object class: org.smartfrog.sfcore.workflow.combinators.Sequence
   primSFCompleteName: HOST "9.76.153.205":rootProcess:sceptre
   primContext: included
   reference: HOST "9.76.153.205":rootProcess:sceptre
____

Thanks so much,
Jeff
|
From: Guijarro, J. <jul...@hp...> - 2007-11-12 23:12:45
|
Hello Everyone!

We are pleased to announce a new release of SmartFrog, release 3.12.010:
https://sourceforge.net/project/showfiles.php?group_id=87384&package_id=108447&release_id=552931

The main changes are:
-better diagnostics (we can detect duplicate smartfrog JARs on the classpath)
-a new AntBuild component that can run complete Ant build files on target machines.

The latter isn't included with any good documentation; this is still ongoing. Keep an eye on http://smartfrog.svn.sourceforge.net/viewvc/*checkout*/smartfrog/trunk/core/components/ant/doc/ant_readme.sxw

As usual, feedback and bug reports to Jira and the mailing list.

Have fun!

SmartFrog Team.

Changes since last release
==========================

There have been no major changes to the core SmartFrog engine or components since the last release. We have added some diagnostics; any of the smartfrog commands can be called with the -diagnostics option, which will print out diagnostic information about the environment in which SmartFrog is running.

There is now an AntBuild component in the sf-ant package, defined in /org/smartfrog/services/ant/components.sf . This component can run an existing Ant build file inside a SmartFrog process, passing down properties and collecting results. Output is passed to the SmartFrog log infrastructure, and properties from the build can be turned into attributes on a designated target component. Build failures can be configured to terminate the component. If the AntBuild component is terminated mid-build, a best-effort attempt will be made to interrupt the build; if that does not halt the build within a specified timeout, termination can be forced.

This component enables you to integrate existing XML build files into a SmartFrog-managed deployment, potentially running a build file remotely.

Users are requested to provide feedback, to help improve the functionality of the component. We are particularly interested in improving termination, failure and reporting. We also have to complete the documentation for this component - please ask on the mailing list for the location of the latest documentation in the subversion repository.

Release Notes - SmartFrog - Version 3.12.010

** Bug
* [SFOS-500] - Ant project properties aren't remotely accessible

** Improvement
* [SFOS-525] - move resourceloader logic into core
* [SFOS-526] - move list operations into a central utils class
* [SFOS-530] - add support for a vector of file references in the FileSystem class
* [SFOS-534] - Add a standard way to SmartFrogTask to let other classes wait for a thread to finish

** New Feature
* [SFOS-499] - Add a component to run a specific build file
* [SFOS-536] - add ability to propagate the Ant properties to a remote target
* [SFOS-537] - Add version information to SmartFrogException
* [SFOS-538] - Add diagnostics check for repeated jar file names in classpath

** Task
* [SFOS-497] - async Ant execution needs tests
* [SFOS-521] - Automated way to update avlEventServer in sfinstaller.vm template file

** Sub-task
* [SFOS-522] - Add new API to sfinstaller component to read the xmpp servername and generate description

--
-----------------------
Hewlett-Packard Limited
Registered Office: Cain Road, Bracknell, Berks RG12 1HN
Registered No: 690597 England
|
From: Steve L. <ste...@hp...> - 2007-10-29 17:59:36
|
> -----Original Message-----
> From: sma...@li... [mailto:sma...@li...] On Behalf Of Jeff Garratt
> Sent: 25 October 2007 21:07
> To: sma...@li...
> Subject: [Smartfrog-users] Issue with Sequence component
>
> I would like the commands in Test to execute sequentially,
> terminating upon completion. I tried the following, no such luck.
> Any suggestions as to what I should be doing?
>
> Thanks,
>
> ---------------------
> Test extends Sequence {
>
>     projectDir TBD;
>
>     installDrivers extends InstallOG {
>         destDir "c:/temp";
>     }
>
>     perf-test extends WinNTShellScript {
>         workDir         PARENT:projectDir;
>         processName     "ant";
>         cmd             ["ant -f sceptre-build.xml test-amit-perf"];
>         shouldTerminate true;
>     }
>
> }
>
>
> sfConfig extends Compound {
>     sfSyncTerminate true;
>
>     test extends Test {
>         projectDir "C:/temp";
>     }
>
> }

Sequence is very good at executing components that are marked up to run as workflow components, meaning you set the attribute sfTerminate to true and they terminate themselves after deployment. The shell script components are such components, though for historical reasons they use shouldTerminate, as you have noticed.

What I don't know is what your InstallOG component does. Assuming it is just a copyfile, that one is not (currently) a workflow component. Instead it is designed to copy the file on startup and stay deployed; the derivative DeployByCopy component deletes the file when terminated, so there is clearly some value in having this design -sometimes.

1. I could do something about making copy more part of a workflow; file the issue on jira.smartfrog.org and I will add it to the list of things I have to do.

2. If you are just trying to set things up for your ant run, we have a TestCompound component under /org/smartfrog/services/assertions/components.sf. This is how we do almost all of our own functional tests. At its simplest, you can deploy a child named "action" and then, once it is deployed, run the "tests" child:

#include "/org/smartfrog/services/assertions/components.sf"

Test extends TestCompound {
    projectDir TBD;

    action extends InstallOG {
        destDir "c:/temp";
    }

    tests extends WinNTShellScript {
        workDir         PARENT:projectDir;
        processName     "ant";
        cmd             ["ant -f sceptre-build.xml test-amit-perf"];
        shouldTerminate true;
    }
}

There are lots of extra features in the TestCompound, not least of which are the testTimeout attribute, to specify a limit on how long the tests should take to run, and a condition which can be used so you can skip tests on unsupported platforms. The other nice feature is that the class can report the end of the test run to local or remote listeners, so we can run this from a remote junit process. In our testharness jar, there's a DeployingTestBase junit base class, which can deploy a test compound and wait for status updates, which are then turned into junit results:

public void testCaseTCNNonexistentHost() throws Throwable {
    expectSuccessfulTestRunOrSkip("org/smartfrog/test/system/components/ssh/scp/", "tcn_nonexistent_host.sf");
}

public void testtcn_mismatched_file_listTest() throws Throwable {
    expectSuccessfulTestRun("org/smartfrog/test/system/components/ssh/scp/", "tcn_mismatched_file_list.sf");
}

I would strongly encourage you to look at the TestCompound for setting up workflows related to testing, it being how we test most of our own code. That doesn't mean that copy can't be made more workflow friendly, though.

3. To run ant in its own process, I'd probably bypass the shell scripts and put together something from RunJava that did it, with more control over the JVM and the ant process.

4. One thing that is already on the todo list of Julio or myself is something to run ant directly, with a component that starts up ant and runs it in the current (or a spawned) process: http://jira.smartfrog.org/jira/browse/SFOS-499

The result would look something like:

ReleaseBuild extends BuildFile {
    dir "/home/bob/release";
}

build extends AntBuild {
    dir        "/home/bob/release";
    targets    ["clean","release"];
    properties [];
}

One advantage there is that we would be able to pull ant output into the SmartFrog log, retaining Ant's own log levels. The appropriate listener is already in the code, so all that needs to go in is a replacement for Ant's own main class. This is in the list as a minor "it would be nice", but if you have a more pressing need, it should not take that long to write, with a test or two.

-steve

-----------------------
Hewlett-Packard Limited
Registered Office: Cain Road, Bracknell, Berks RG12 1HN
Registered No: 690597 England
|
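The testTimeout attribute mentioned in the message above can be bolted onto the TestCompound example; here is a minimal sketch, reusing the names from that example and assuming the timeout is given in milliseconds (the unit is an assumption, not documented here):

// Sketch: testTimeout is named in the message above; the unit (milliseconds)
// is an assumption. InstallOG and the ant command come from Jeff's descriptor.
#include "/org/smartfrog/services/assertions/components.sf"

Test extends TestCompound {
    testTimeout 120000;      // abandon the run if the tests take longer than this
    projectDir  "C:/temp";

    action extends InstallOG {
        destDir "c:/temp";
    }

    tests extends WinNTShellScript {
        workDir         PARENT:projectDir;
        processName     "ant";
        cmd             ["ant -f sceptre-build.xml test-amit-perf"];
        shouldTerminate true;
    }
}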
From: Guijarro, J. <jul...@hp...> - 2007-10-29 17:02:18
|
To simplify the communication among SmartFrog users, we are merging the developers and support lists into one new list:

mailto:sma...@li...

All users registered with the old lists have now been manually registered in the new list.

To subscribe/unsubscribe: https://lists.sourceforge.net/lists/listinfo/smartfrog-users

The archives for the new list are at: http://news.gmane.org/gmane.comp.java.smartfrog.user/cutoff=315

The archives of the previous lists will remain at: http://news.gmane.org/search.php?match=smartfrog

New releases will be announced on the "announce" and "users" lists.

For commit messages you can subscribe to: https://lists.sourceforge.net/lists/listinfo/smartfrog-checkins

Best regards,

The SmartFrog team.
|
From: Jeff G. <jef...@ya...> - 2007-10-25 20:06:48
|
I would like the commands in Test to execute sequentially, terminating upon completion. I tried the following, no such luck.
Any suggestions as to what I should be doing?

Thanks,

---------------------
Test extends Sequence {

    projectDir TBD;

    installDrivers extends InstallOG {
        destDir "c:/temp";
    }

    perf-test extends WinNTShellScript {
        workDir         PARENT:projectDir;
        processName     "ant";
        cmd             ["ant -f sceptre-build.xml test-amit-perf"];
        shouldTerminate true;
    }

}


sfConfig extends Compound {
    sfSyncTerminate true;

    test extends Test {
        projectDir "C:/temp";
    }

}

----------------------------------------
|
From: Guijarro, J. <jul...@hp...> - 2007-10-11 14:44:32
|
Test2 J |
From: Guijarro, J. <jul...@hp...> - 2007-09-16 21:35:37
|
Test, J |
From: Guijarro, J. <jul...@hp...> - 2007-08-28 12:52:17
|
Test |