
Dynamically Creating Jobs

General
2011-05-20
2013-05-28
  • John Stauffer

    John Stauffer - 2011-05-20

    I'm just getting started with OddJob, but am very happy to find such a great tool.

    I've spent a few hours with the docs, but I'm not sure of the simplest/cleanest way to build a job to manage operations in our cloud environment. An example of what I want to do is to begin a processing sequence by collecting log files from all of our cloud servers into a single location. I'm thinking something like this:

    Scheduler
        Sequence
            Run Job to get a list of servers
            Foreach server, add an instance of a parameterized job to the "Parallel Batch Move" job <-- How?
            Parallel Batch move
                 Job to move log files to central location
            Process batch files

    Is there a clean way of doing this? I didn't see any jobs that provide a way of adding child jobs, so that may not be the right way to think about it.

    Thanks for any hints you can give me.

    John

  • Rob Gordon

    Rob Gordon - 2011-05-21

    Hi,

    Something like this might be what you want:

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <oddjob id="this">
        <job>
            <scheduling:timer xmlns:scheduling="http://rgordon.co.uk/oddjob/scheduling">
                <schedule>
                    <schedules:interval interval="00:00:10" xmlns:schedules="http://rgordon.co.uk/oddjob/schedules"/>
                </schedule>
                <job>
                    <sequential>
                        <jobs>
                            <variables id="vars">
                                <servers>
                                    <buffer/>
                                </servers>
                            </variables>
                            <echo text="Job that writes server names to file server-list.txt"/>
                            <copy>
                                <from>
                                    <file file="${this.dir}/server-list.txt"/>
                                </from>
                                <output>
                                    <value value="${vars.servers}"/>
                                </output>
                            </copy>
                            <foreach id="servers">
                                <values>
                                    <value value="${vars.servers.lines}"/>
                                </values>
                                <configuration>
                                    <arooa:configuration xmlns:arooa="http://rgordon.co.uk/oddjob/arooa">
                                        <file>
                                            <file file="${this.dir}/each-server.xml"/>
                                        </file>
                                    </arooa:configuration>
                                </configuration>
                            </foreach>
                        </jobs>
                    </sequential>
                </job>
            </scheduling:timer>
        </job>
    </oddjob>

    Where the foreach configuration is:

    <parallel continue="true">
        <executorService>
            <value value="${this.services.oddjobExecutors.poolExecutor}"/>
        </executorService>
        <jobs>
            <sequential>
                <jobs>
                    <echo
                        text="echo Command To Pull Files from ${servers.current}"/>
                    <echo text="Command To Store files from ${servers.current}"/>
                </jobs>
            </sequential>
        </jobs>
    </parallel>

    In trying this example I found a bug with foreach where the executor service isn't configured automatically on the parallel job, which is why there is this ungainly setting of the executorService property.

    Hope this helps.

    Rob.

  • John Stauffer

    John Stauffer - 2011-05-23

    Hi Rob -

    Thanks for the quick reply. That does the trick.

    I wrapped <foreach> with <state:join> to ensure that all of the nested jobs finished before starting the next step.
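    For reference, a minimal sketch of what that wrapping might look like, reusing the foreach from Rob's example and assuming the state namespace from the Oddjob docs (the surrounding sequential job is omitted):

    ```xml
    <!-- Sketch only: state:join waits for its child job, including any
         asynchronously executing descendants, to reach a finished state
         before the enclosing sequential job moves on. -->
    <state:join xmlns:state="http://rgordon.co.uk/oddjob/state">
        <job>
            <foreach id="servers">
                <values>
                    <value value="${vars.servers.lines}"/>
                </values>
                <configuration>
                    <arooa:configuration xmlns:arooa="http://rgordon.co.uk/oddjob/arooa">
                        <file>
                            <file file="${this.dir}/each-server.xml"/>
                        </file>
                    </arooa:configuration>
                </configuration>
            </foreach>
        </job>
    </state:join>
    ```
    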

    Thanks,
    John

