Utility Index

Revision as of 11:04, 31 December 2010

All the utilities presented here share the following common set of options:

  • --version show program's version number and exit.
    All the utilities share the same version number - the library version. So the easiest way to find out which version of CloudFlu you are working with is to run any CloudFlu utility with the --version command-line option.
  • -h, --help show this help message and exit.
    Use this option to explore the available command-line options, their default values, and a corresponding usage example.
  • --debug turn on display of debug information.
  • --log-file=< file to write debug information >
  • --aws-access-key-id=< Amazon Key Id >
    Used for communication with Amazon Web Services. Check your Amazon account to obtain the corresponding value.
  • --aws-secret-access-key=< Amazon Secret Key >
    Used for communication with Amazon Web Services. Check your Amazon account to obtain the corresponding value.

Some of the command-line options are persistent, which means they can be preset globally from the corresponding preferences file. Such options are highlighted by underlining. For example, the following common options are persistent: --debug, --log-file, --aws-access-key-id and --aws-secret-access-key.
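Even when a persistent option is preset in the preferences file, it can always be given explicitly on the command line, where it overrides the preset value. A minimal sketch (the shell variables are placeholders for your own Amazon credentials, and cloudflu-cluster-ls is just one example utility):

```shell
# Pass the persistent common options explicitly; these override
# whatever is preset in the CloudFlu preferences file.
cloudflu-cluster-ls \
    --aws-access-key-id="${AWS_ACCESS_KEY_ID}" \
    --aws-secret-access-key="${AWS_SECRET_ACCESS_KEY}" \
    --debug --log-file=./cloudflu.log
```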

Below, all the CloudFlu utilities in alphabetical order are discussed.


cloudflu-clean

This utility aims to clean all the cloud resources CloudFlu users could have acquired. It does not check whether a resource is still in use, so use this utility with caution. The best practice is to call it at the end of every session.

It has no particular command-line options except the common ones mentioned above.

Usage : cloudflu-clean

cloudflu-cluster-ls

This utility lists all the running clusters. As a result, the user sees the corresponding identifiers (something like r-0d6c3467). A user can start as many clusters as needed; with the cloudflu-cluster-ls utility the user can always check which clusters are already running.

It has no particular command-line options except the common ones mentioned above.

Usage : cloudflu-cluster-ls

cloudflu-cluster-rm

This utility terminates the appointed cluster by its identifier. The cluster identifier can be obtained as the result of the a_cluster_id=`cloudflu-cluster-start` && echo ${a_cluster_id} command, or extracted from the output of the cloudflu-cluster-ls utility.

Extra options :

  • --cluster-id=< cluster identifier >
    (read from standard input, if not given)

Usage example :

  • cloudflu-cluster-rm --cluster-id=r-0d6c3467 # removes particular cluster
  • cloudflu-cluster-rm r-0d6c3467 # even shorter
  • cloudflu-cluster-ls | cloudflu-cluster-rm # removes all running clusters

cloudflu-cluster-start

Starts a new cluster; returns the corresponding cluster identifier as its output.

Extra options :

  • --instance-type=< EC2 instance type : 'c1.xlarge' or 'm1.large', for example >
    To see the differences among the available instance types, check http://aws.amazon.com/ec2/instance-types. Note : CloudFlu works only with 64-bit instance types.
  • --image-id=< Amazon EC2 AMI ID >
    The image contains both the operating system and the pre-installed software packages necessary to run OpenFOAM (R) use cases. Note : different images represent different OpenFOAM (R) versions; check your preferences file to choose the proper one.
  • --number-nodes=< number of cluster nodes >
    This is actually the most crucial cluster configuration parameter; the user needs to make some estimates before assigning a value. Important : increasing the number of cluster nodes does not directly increase performance; case partitioning needs to balance per-node performance against the cost of inter-node data exchange. (We will be glad if you share your experience on this account, to give us some direction and rules on how to estimate the number of cluster nodes in a formal way.)

Usage example : cloudflu-cluster-start --instance-type='c1.xlarge' --image-id='ami-ecf50385' --number-nodes=8
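A typical cluster lifecycle can be sketched by composing the utilities documented on this page; the instance type, node count and the habit of finishing with cloudflu-clean are illustrative choices, not requirements:

```shell
# Start a cluster and capture its identifier from standard output.
a_cluster_id=`cloudflu-cluster-start --instance-type='c1.xlarge' --number-nodes=4`

# Check which clusters are currently running.
cloudflu-cluster-ls

# Terminate the cluster when the work is done...
cloudflu-cluster-rm --cluster-id=${a_cluster_id}

# ...and release any remaining cloud resources at the end of the session.
cloudflu-clean
```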

cloudflu-config

Finds out the best values for the key CloudFlu parameters so that it can interact with the cluster more efficiently. Updates the CloudFlu preferences file as its output. It is best to run it before the first CloudFlu usage and each time you change your location (move from country to country) or the speed and quality of your internet connection (3G, Ethernet, etc.).

Extra options :

  • --region=< cluster region : 'EU', ( us-east ), 'us-west-1', 'ap-southeast-1' or None >
    The cluster region is the nearest and, from an access-speed viewpoint, most efficient Amazon data center. Users can configure CloudFlu to work with a particular data center. By choosing 'None' (the default value) you ask this utility to automatically recognize and set up the 'best' data center region.
  • --number-threads=< number of threads >
    CloudFlu makes heavy use of multi-threading to increase its speed, reliability and efficiency. By defining a value for this option you can help this utility start close to the optimum. Usually it is 3 for a 3G connection and 4 for an Ethernet-like one.
  • --precision=< algorithm precision, % >
    This value is taken into account by all the algorithms involved in this utility, namely :
    • definition of the 'best' cluster region;
    • definition of the most efficient number of threads to use in CloudFlu data transfer algorithms;
    • definition of the most reliable seed size for data transfer.
  • --start-size=< start value for the search algorithm, bytes >
    Start value for the algorithm that determines the most efficient seed size
  • --solution-window=< initial solution window considered to, % >
    A window for measuring seed-size efficiency and looking for the best among these measurements
  • --number-measurements=< number measurements to be done in the solution window >
    Number of measurements to be done before a decision can be taken

Usage example : cloudflu-config --start-size=200000 --solution-window=50 --precision=10 --number-measurements=8

cloudflu-credentials-deploy

Deploys the Amazon security credentials to the cluster so that the corresponding CloudFlu functionality can be performed on the cluster side. Usually appears in the context of the cloudflu-reservation-run and cloudflu-instance-extract utilities. For internal use only.

Extra options :

  • --aws-user-id=< AWS User ID >
    Uses preferences or ${AWS_USER_ID} environment variable, if not specified
  • --ec2-private-key=< EC2 Private Key >
    Uses preferences or ${EC2_PRIVATE_KEY} environment variable, if not specified
  • --ec2-cert=< EC2 Certificate >
    Uses preferences or ${EC2_CERT} environment variable, if not specified
  • --remote-location=< destination of the credentials environment files >

Usage example : cloudflu-reservation-run | cloudflu-instance-extract | cloudflu-credentials-deploy

cloudflu-deploy

Deploys CloudFlu itself onto the cluster. This operation is performed on each cluster acquisition so that the latest possible version of the CloudFlu functionality is used automatically. Usually appears in the context of the cloudflu-reservation-run and cloudflu-instance-extract utilities. For internal use only.

Extra options :

  • --production
    If enabled, deploys the latest official CloudFlu version
  • --url=< package name or precise location to download from >
    'cloudflu', by default

Usage example : cloudflu-reservation-run | cloudflu-instance-extract | cloudflu-deploy

cloudflu-download

Downloads the specified study data from the cloud. This is one of the basic data exchange utilities. Usually appears in the context of the cloudflu-upload-start and cloudflu-upload-resume utilities.

Extra options :

  • --study-name=< existing study name >
    Read from standard input, if not specified
  • --located-files=< the list of file paths inside the study >
    The user can choose whether to download all the data registered for the study or only the appointed ones. This option is extremely useful for loading the data in a predefined order (CloudFlu works asynchronously because of its multi-threading usage)
  • --output-dir=< location of the output data >
    The same as the study name, if not specified
  • --fresh - if enabled, replaces the downloaded items even if they already exist (False, by default)
  • --wait - if enabled, waits for the downloading items' uploading to complete (False, by default)
  • --remove - if enabled, automatically removes downloaded items from the study (False, by default)

Usage example : cloudflu-download --study-name=< user study name >[ --located-files="<file path 1>|<file path 2>.."] --output-dir="./tmp"
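A concrete invocation combining the flags above might look as follows; the study name "damBreak" and the file paths "0/U" and "0/p" are purely illustrative placeholders:

```shell
# Download only the two appointed files from a (hypothetical) study,
# waiting for their upload to complete and removing them from the
# study once they have been fetched.
cloudflu-download --study-name="damBreak" \
    --located-files="0/U|0/p" \
    --output-dir="./damBreak.out" \
    --wait --remove
```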

cloudflu-files-clean

Cleans all the file-related information from the cloud. It works like a shutdown for the cloud data storage functionality. Use this functionality with caution.

Has no additional options.

Usage example : cloudflu-files-clean

cloudflu-foam2vtk

An example --time-hook implementation for the cloudflu-solver-process utility. It runs the native OpenFOAM foamToVTK utility for the given time-stamp. The cloudflu-solver-process utility, in its turn, is responsible for invoking this hook script for each loaded time-stamp. Use this script if you would like to automatically perform post-processing on the running solver case.

Usage example : cloudflu-solver-process --study-name=${a_study_name} --output-dir="~/damBreak.out" --time-hook="cloudflu-foam2vtk"

cloudflu-foam2vtk-after

An example --after-hook implementation for the cloudflu-solver-process utility. It launches the ParaView application for the given set of "vtk.*" files generated by the cloudflu-foam2vtk utility.

Usage example : cloudflu-solver-process --study-name=${a_study_name} --output-dir="~/damBreak.out" --time-hook="cloudflu-foam2vtk" --after-hook="cloudflu-foam2vtk-after"

cloudflu-foam2vtk-before

An example --before-hook implementation for the cloudflu-solver-process utility. It runs the native OpenFOAM (R) paraFoam utility on the output solver case downloaded by the cloudflu-solver-process utility.

Usage example : cloudflu-solver-process --study-name=${a_study_name} --output-dir="~/damBreak.out" --before-hook="cloudflu-foam2vtk-before" --time-hook="cloudflu-foam2vtk"
