*** A Guide to Using CRUSH-2. ***

Attila Kovacs <attila[AT]submm.caltech.edu>

Last updated: 24 April 2015


Table of Contents
=================

1. Getting Started
   1.1 Installation
   1.2 The Tools of the CRUSH Suite
   1.3 Quick Start
   1.4 A Brief Description of What CRUSH Does and How...
   1.5 Command-line Options and Scripting Support

2. Advanced Topics
   2.1 Filtering corrections, transfer functions, etc.
   2.2 Making Sense of the Console Output
   2.3 Pointing Corrections
   2.4 Recovery of Extended Emission
   2.5 Pixellization and Smoothing
   2.6 Image Processing Post-Reduction
   2.7 Reducing Very Large Datasets (to do...)
   2.8 Custom Logging Support

3. Advanced Configuration
   3.1 Commands
   3.2 Pipeline Configuration
   3.3 Source (Map) Configuration
   3.4 Scan Configuration
   3.5 Instrument Configuration

4. Correlated Signals
   4.1 Modes and Modalities
   4.2 Removing Correlated Signals
   4.3 User-defined Modalities

6. Examples

7. Future Developments (A Wish List...)
   7.1 Support for Heterodyne Receivers and Interferometry
   7.2 Interactive Reductions
   7.3 Distributed and GPU Computing
   7.4 Graphical User-Interface (GUI)

8. Further information


#####################################################################
1. Getting Started
#####################################################################

CRUSH-2 is a reduction and imaging package for various astronomical
imaging arrays. It is based on the development centred on SHARC-2
(crush-1.xx), as well as its reincarnation for the APEX bolometers
(minicrush). Version 2 provides, for the first time, a unified platform
for reducing data from virtually any high-background bolometer
instrument.
Currently, it supports the following instruments (in alphabetical order):

  ASZCA       (2mm) APEX SZ Camera, from Berkeley, CA
              bolo.berkeley.edu/apexsz/instrument.html

  GISMO       (2mm) Goddard-IRAM Superconducting 2-Millimeter Observer
              www.iram.es/IRAMES/mainWiki/GoddardIramSuperconductingTwoMillimeter

  LABOCA      (870um) Large APEX Bolometer Camera
              www.apex-telescope.org/bolometer/laboca

  MAKO        (350um) KID technology demonstration camera for the CSO.

  MAKO-2      (350um, 850um) Second-generation KID technology
              demonstration camera for the CSO, with dual-pol pixel
              response and dual-band imaging capability.

  MUSTANG-2   (3mm) Large focal-plane array for the 100m Green Bank
              Telescope.
              www.gb.nrao.edu/mustang/

  p-ArTeMiS   (200um, 350um, 450um) 3-color camera for APEX (prototype)
              www.apex-telescope.org/instruments/pi/artemis

  PolKA       (polarimetry) Polarimeter for LABOCA
              www.mpifr-bonn.mpg.de/staff/gsiringo/laboca/laboca_polarimeter.html

  SABOCA      (350um) Submillimeter APEX Bolometer Camera
              www.apex-telescope.org/bolometer/saboca

  SCUBA-2     (450um, 850um) Submillimetre Common User Bolometer Array 2
              www.roe.ac.uk/ukatc/projects/scubatwo

  SHARC       (350um, 450um) Submillimeter High-Angular Resolution
              Camera, Caltech, Pasadena, CA
              http://www.submm.caltech.edu/cso/sharc/cso_sharc.html

  SHARC-2     (350um, 450um, 850um) The second-generation Submillimeter
              High-Angular Resolution Camera, Caltech, Pasadena, CA
              www.submm.caltech.edu/~sharc

  SOFIA/HAWC+ (53um, 62um, 89um, 155um, 216um) High-resolution Airborne
              Wide-angle Camera

Further instruments can be supported. If you are interested in using
CRUSH for your bolometer array, please contact Attila Kovacs
<attila[AT]submm.caltech.edu>. In principle, it is possible to extend
CRUSH support to other types of scanning instruments, such as heterodyne
receiver arrays, single-pixel receivers that scan in frequency space, or
the reduction of line surveys from double-sideband (DSB) receivers.
Such new features may appear in future releases, especially upon
specific request and/or arrangement...


1.1 Installation
================

Install Java (if not already installed)
---------------------------------------

Download Java, e.g. from www.java.com. If you already have Java, check
that it is version 1.6.0 (a.k.a. Java 6) or later, by typing:

  > java -version

Note that GNU Java, a.k.a. gij (the default on some older RedHat and
Fedora systems), is painfully slow and unreliable, and will not always
run CRUSH correctly. If you need Java, you can download the latest JRE
from www.java.com


Download CRUSH
--------------

You can get the latest version of CRUSH from:

  http://www.submm.caltech.edu/~sharc/crush

Linux users can grab one of the distribution packages to install CRUSH
via a package manager. Packages are provided for both RPM-based
distributions (like Fedora, RedHat, CentOS, or SUSE) and Debian-based
distributions (e.g. Ubuntu, Mint). Both will install CRUSH in
'/usr/share/crush'. Note that if you use one of these packages you will
need root privileges (e.g. via 'sudo') on the machine. Unprivileged
users should install from the tarball instead.

For all others, CRUSH is distributed as a gzipped tarball (or as a zip
archive). Simply unpack it in the directory where you want crush to
reside.


A. Installation from tarball (POSIX/UNIX, incl. Mac OS X)
---------------------------------------------------------

Unpack the tarball in the desired location (e.g. under '~/astrotools/'):

  > cd ~/astrotools
  > tar xzf crush-2.xx-x.tar.gz

Verify that CRUSH works:

  > cd crush
  > ./crush

You should see a brief information screen on how to use CRUSH.


B. Installation via Linux packages
----------------------------------

Use the package manager on your system to install. E.g., on RPM-based
systems (Fedora, RHEL, CentOS, SUSE) you may type:

  > sudo yum localinstall --nogpg crush-<version>-noarch.rpm

On Debian-based systems (e.g.
Ubuntu, Mint) the same is achieved by:

  > sudo apt-get install crush-<version>.deb

Or, you may simply double-click the package icon from a file browser.


C. Windows Installation
-----------------------

Extract the ZIP archive into the desired location. To run CRUSH under
Windows, copy/move the .bat files from under the 'windows' subdirectory
into the main crush directory. Additionally, you may want to add the
crush installation directory to your path (in 'autoexec.bat') to run
crush from outside of its own directory. In this case you should also
set the CRUSH variable to point to the installation directory (to do
this, add a line like:

  set CRUSH=<the-crush-directory>

to your 'autoexec.bat').

A note of caution: while Windows is generally case-insensitive, this is
not necessarily the case for Java paths. Therefore, when setting paths,
such as 'datapath', it is advised that drive letters appear capitalized,
e.g. as 'C:\data', as well as anything else that might warrant it (e.g.
"My Documents").


(optional) Java Configuration
-----------------------------

CRUSH ships with a default Java configuration. On the most common UNIX
platforms (Linux, Mac OS X, BSD, and Solaris), it will automatically
attempt to set an optimal configuration. On other platforms, it comes
with fail-safe default values (default java, 32-bit mode and 1GB of RAM
use).

To override the defaults on Windows, edit 'wrapper.bat' directly. On all
other platforms, you can override the defaults by placing your settings
in arbitrary files under /etc/crush2/startup or ~/.crush2/startup. (Any
settings in the user's home under ~/.crush2/startup will override the
system-wide values in /etc/crush2/startup. If multiple config files
exist in the same location, these will be parsed in non-specific order).
E.g., placing the following lines in ~/.crush2/startup/java.conf
overrides all available runtime configuration settings:

  JAVA="/usr/java/latest/bin/java"
  DATAMODEL="64"
  USEMB="4000"
  JVM="-server"
  EXTRAOPTS=""

Upon startup, CRUSH will find and apply these settings, so it will use
"/usr/java/latest/bin/java" as the Java runtime, in 64-bit mode, with
4GB of RAM, using the HotSpot 'server' VM, with no extra Java options.
(Note, these config files are parsed as bash scripts, so you may use
other bash commands too, such as printing earlier settings via 'echo'.)

Below is a guide to the variables that you can override to set your own
Java runtime configuration:

  JAVA        Set to the location of the Java executable you want to
              use. E.g. "java" to use the default Java, or
              "/usr/java/latest/bin/java" to use the latest from Oracle
              or OpenJDK.

  DATAMODEL   Set to "32" or "64", to select 32 or 64-bit mode. To use
              64-bit mode you will need both a 64-bit OS and a 64-bit
              JRE (Java Runtime Environment) installation.

  USEMB       Set to the maximum amount of RAM (in MB) available to
              CRUSH. E.g. "4000" for 4GB. Note that when DATAMODEL is
              "32", this value must be somewhere below 2000. Thus,
              "1900" is a good maximum value to use in 32-bit mode.

  JVM         Usually set to "-server" for Oracle or OpenJDK. If using
              IBM's Java, set it to "" (empty string). On ARM platforms,
              you probably get better performance using "-jamvm" or
              "-avian". To see what VM options are available, run
              'java -help'. The VM options are listed near the top of
              the resulting help screen.

  EXTRAOPTS   Any other non-standard options you may want to pass to the
              Java VM should go here.


CRUSH Configuration
-------------------

The preferred way of creating user-specific configurations for CRUSH is
to place your personalized configuration files inside a '.crush2/'
configuration directory within your home. This way, your personalized
configurations will survive future updates to CRUSH.
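For orientation, a personalized setup combining the pieces described
above might look like this (the layout is illustrative; only the files
you actually create need to exist):

  ~/.crush2/
      startup/java.conf      # optional Java runtime overrides (JAVA, USEMB, ...)
      default.cfg            # user-wide configuration overrides
      laboca/default.cfg     # instrument-specific overrides (e.g. for LABOCA)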
You can create/edit your default configuration by editing 'default.cfg'
(either in the installation folder or in '~/.crush2', or in the
appropriate instrument subdirectory within either). As an example, a
user configuration for LABOCA data can be placed into
'~/.crush2/laboca/default.cfg', with the content:

  datapath ~/data
  outpath ~/images
  project T-79.F-0002-2006

This tells crush that when reducing LABOCA data it should look for raw
data files in '~/data', write all output to '~/images', and use
'T-79.F-0002-2006' as the default project.

The tilde character '~' is used as a shorthand for the home directory
(similarly to UNIX shell syntax). In your configuration you can also
refer to environment variables or other settings (see more about this
further below).

Of course, you can create any number of configurations, name them as you
like, and place them where you like (practical if you have many data
locations, projects etc.). You can easily invoke configuration files as
needed via

  > crush [...] -config=<path-to-myconfig> [...]


1.2 The Tools of the CRUSH Suite
================================

CRUSH provides a collection of other useful tools. Here's a short
description of what is there and what they are used for. Each tool, when
run without options (or with the -help option), will print a list of
available options on the console.

  crush       The principal reduction tool.

  imagetool   A tool for manipulating images. Can also deal with images
              produced by BoA (and, to some degree, other images also).

  show        A simple display program for FITS files, with various
              useful functions for simple analysis and image
              manipulation. After starting, press 'h' for help.

  coadd       Add FITS images together. Use as a last-resort tool, as it
              is always better to reduce scans together.

  difference  Lets you look at the difference between two images.

  histogram   Write a histogram of the pixel distribution of an image
              plane (e.g. 'flux', 'rms', 's2n', 'weight', or 'time').
  detect      A source extraction tool for maps. You should make sure
              that the noise is close enough to Gaussian (e.g. with
              'histogram') before relying on this.

  esorename   Rename ESO archive files to their original names.


1.3 Quick Start
===============

To reduce data, simply specify the instrument (e.g. 'sharc2',
'laboca'...) and the scan numbers (or names of the scan files or
directories) to crush. (You may have to add './' before 'crush' if the
current directory '.' is not in your path.) E.g.,

  > crush laboca 10059

will reduce LABOCA scan 10059 with the default reduction parameters. You
can specify additional options. Some of these apply to the reduction as
a whole, while others will affect the scan processing for those scan
numbers that are listed *after* the option flag.

If you are new to CRUSH (or used version 1.xx before), you should be
able to start working with it by the time you get to the end of this
page. Nevertheless, I recommend that you read on through the entire
Sections 1--2 (Getting Started & Basic Configuration), and you will
become a truly well-versed user. :-) Once you get the hang of it, and
feel like you need more tweaking ability, feel free to read on yet
further to see what other fine-tuning capabilities exist...

Here are some quick tips:

* Reduce as many scans together as you can. E.g.

  > crush laboca 10657-10663 10781 11233-11235 [...]

* You can specify opacities, pointings, scaling etc., for each set of
  scans listed (see the section on scan-specific options). E.g.,

  > crush laboca -tau=0.32 10657-10663 10781 -tau=0.18 11233 [...]

  will use a zenith tau value of 0.32 for 10657-10663 and 10781, and
  0.18 for the last scan (11233).

* If you suspect that you are missing extended emission (see the
  section on the Recovery of Extended Structures), then you can specify
  the 'extended' option, which will better preserve large-scale
  structures, albeit at the price of more noise. E.g.
  > crush laboca -extended 10657-10663

* If your source is faint (meaning S/N < 10 in a single scan), then you
  may try the 'faint' option. E.g.

  > crush laboca -faint 10657-10663

  or,

  > crush laboca -faint -extended 10657-10663

  to also try to preserve extended structures (see above).

* For finer control of how much large-scale structure is retained, you
  can use the 'sourcesize' option, providing a typical source scale (in
  arcsecs) that you'd like to see preserved. CRUSH will then optimize
  its settings to do the best it can to get clean maps while keeping
  structures up to the specified scale more or less intact. E.g.

  > crush [...] -sourcesize=45.0 10657-10663

  will optimize the reduction for <~ 45 arcsec sources.

* Finally, if your sources are not only faint, but also point-like, you
  should try the 'deep' option. This will use the most aggressive
  filtering to yield the cleanest-looking maps possible. E.g.,

  > crush laboca -deep 10657-10663

With just these few tips you should be able to do a decent job of
getting the results you seek.

CRUSH can also help you reduce pointing/calibration, skydip, and beammap
scans. E.g.:

* To reduce a LABOCA pointing/calibration scan 11564:

  > crush laboca -point 11564

  At the end of the reduction CRUSH will suggest pointing corrections
  and provide detailed information on the source (such as peak and
  integrated fluxes, FWHM and elongation). Once you determine the
  appropriate pointing corrections for your science scans, you can
  apply these via the 'pointing=x,y' option. E.g.:

  > crush laboca -pointing=3.2,-0.6 12069

* You can also reduce skydips, for determining in-band atmospheric
  opacities. E.g.:

  > crush laboca -skydip 26648

* Finally, you can derive pixel position information from appropriate
  scans, which are designed to ensure the source is moved over every
  detector in a fully sampled manner.
  To reduce such beam-maps:

  > crush aszca -beammap 5707

  CRUSH will write RCP files as output, containing pixel positions and
  source and sky gains in the standard 5-column APEX RCP format. You can
  feed the pixel position information into the reduction of other scans
  via the 'rcp' option.

There are a lot more fine-tuning possibilities for the more
adventurous. If interested, you can find documentation further below, as
well as in the README files inside the instrument sub-directories. For a
complete list of crush options, please refer to the GLOSSARY.


1.4 A Brief Description of What CRUSH Does and How...
=====================================================

CRUSH is a pipeline reduction, principally meant to remove correlated
signals (correlated noise) from the time-streams to arrive at clean and
independent bolometer signals, which are then used to make a source
model (usually an image). As such, it is not an interactive reduction
software (as opposed to, e.g., BoA). The term 'scripting' in CRUSH
mainly means defining configuration options (on the command line or
through configuration files), which are parsed in the order they are
read.

During the reduction, CRUSH aims to arrive at a consistent set of
solutions for various correlated signals, the corresponding gains and
the pixel weights, and also tries to identify and flag problematic data.
This means a series of reduction steps, which are iterated a few times
until the required self-consistent solutions are arrived at.

To learn more about the details, please refer to Kovacs, A., "CRUSH:
fast and scalable data reduction for imaging arrays," Proc. SPIE 7020,
45, (2008). If that does not satisfy your curiosity, then you can find
yet more explanation in Kovacs, A., PhD thesis, Caltech (2006).


1.5 Command-line Options and Scripting Support
==============================================

Basic Rules
-----------

Configuration of CRUSH is available either through command line options
or via scripting.
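As a concrete illustration (the file name and scan numbers here are made
up; the keys are ones described elsewhere in this guide), a small
script, say 'myreduction.cfg':

  tau 0.25
  sourcesize 45.0
  read 10657-10663

could be invoked as '> crush laboca -config=myreduction.cfg', and is
equivalent to typing
'> crush laboca -tau=0.25 -sourcesize=45.0 10657-10663' directly.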
You have seen scripting already in the form of 'default.cfg', which
stores the default configuration values (Sec. 1.1). Both command-line
options and scripting are organized in key/value pairs. The main
difference is that the command line (bash is assumed on UNIX
platforms!) imposes restrictions on syntax: e.g., no white spaces or
brackets are allowed (unless you place these in quotes). Keys and
values may be separated by '=', ':' or empty spaces (or even a
combination of these). Command line options start with a dash '-' in
front. Thus, what may look like:

  key1 value1
  key2 value2, value3

in a configuration script will end up as

  > crush [...] -key1=value1 -key2=value2,value3 [...]

on the command line. Otherwise, the two ways of configuring are
generally equivalent to one another. One exception to this rule is
reading scans, which is done via the 'read' key in a script, but on the
command line you can simply list the scan numbers (or ranges or lists
or names). I.e.,

  [...]
  read 10056              # in script

  > crush [...] 10056     # on the command line.

In the next section you'll find a description of the scripting
keywords. Now that you know how to use them also as command line
options, you can choose scripting or command-line, or mix-and-match
them to your liking and convenience...

Key/value pairs are parsed in the order they are specified. Thus, each
specification may override previously defined values. Lines that start
with '#' designate comments, which are ignored by the parser.


Unsetting and Resetting Options
-------------------------------

Additionally, there are a few special keys (or rather commands that
have the same syntax as keys) that are used to unset/reset options. The
command 'forget' can be used to unset keys. Forgotten options can be
reinstated to their prior values via the 'recall' command. E.g.:

  forget tau,smooth

will unset the tau value and disable smoothing.
To later specify a new tau and re-enable smoothing with its previous
setting, you may (say, on the command line this time):

  > crush [...] -tau=0.30 -recall=smooth

As you can see, forgotten keys may be reconfigured at any time. There is
also a way to permanently remove a configuration key, via the
'blacklist' command. It works just like 'forget', except that all
blacklisted keys will be ignored until they are 'whitelist'-ed. Note
that 'whitelist' only removes keys from the blacklist, but does NOT
reinstate them -- this you have to do explicitly afterwards, either
using 'recall' or by specifying a new value.

  blacklist smooth

  smooth=8.0          # Has no effect, because smooth is blacklisted!
  [...]
  whitelist smooth    # Allows smooth to be set again...
  smooth=6.5          # Sets the smoothing to 6.5"

All blacklisted settings can be cleared by:

  forget blacklist


Branched Configuration
----------------------

Options may also have branches, helping to group related keys together
in a hierarchy. Branches are separated by periods, e.g.

  despike
  despike.level 6.0
  despike.method absolute

defines despiking at 6-sigma, while the 'method' subkey selects the
despiking method used.

It is possible to unset/reset entire branches with the commands
'remove' and 'restore', much the same way as 'forget' and 'recall'
operate on individual settings. Thus,

  > crush [...] -remove=despike [...]

unsets 'despike', 'despike.level', 'despike.method' and all other
existing branches of 'despike'. Similarly, 'restore' reinstates the
full branch to its original state.


Conditionals
------------

CRUSH also provides the capability for conditional configuration.
Conditions are formally housed in square brackets. The syntax is:

  [condition] key=value

The basic condition is the existence of an option key. For example,

  [extended] forget correlated.gradients

will disable the decorrelation of gradient sky-noise if or when the
'extended' key is defined, e.g. by

  > crush [...] -extended [...]
Note that if 'extended' is later unset (e.g. via 'forget', 'remove' or
'blacklist'), this will not undo settings that were conditionally
activated when 'extended' was defined.

You can also make settings activate conditioned on another key having
been set to a given value. The syntax for that is:

  [key1?value1] key2=value2

The above statement specifies that if key1 is set to value1, then key2
will be set to value2. (Again, the '=' is optional...)

Other conditions are interpreted by CRUSH, such as settings based on
date (or MJD) or serial number, e.g.:

  mjd.[54555-54869] rcp {$CRUSH}/laboca/laboca-3.rcp

which loads the pixel positions (RCP) 'laboca-3.rcp' from the 'laboca'
subfolder of the crush installation for scans taken in the MJD range
54555 to 54869. Similarly,

  date.[2007.02.28-2009.10.13] instrument.gain -1250.0

or

  serial.[14203-15652] flag 112-154,173

are examples of settings activated based on dates or scan serial
numbers.

As of version 2.05, you may also specify conditional settings based on
the source names, under the 'object' branch. E.g.:

  object.[Jupiter] bright

will invoke the 'bright' configuration for Jupiter. The check for the
source name is case-insensitive, and matches all source names that
begin with the specified sequence of characters. Thus, the SHARC-2
configuration line:

  object.[PNT_] point

will perform a pointing fit at the end of the reduction for all sources
whose catalog names begin with 'PNT_' (e.g. PNT_3C345).

These examples also demonstrate that conditionals can be branched just
like options. (In the above cases, the conditions effectively reside in
the 'mjd', 'date' or 'serial' branches.) Other commonly used
conditionals of this type are iteration-based settings:

  > crush [...] -iteration.[last-1]whiten=2.0 [...]

will activate the noise whitening in the iteration before the last one.

The use of branched conditions can be tricky. For interpreted
conditions a key (e.g.
'iteration', 'mjd', 'serial' or 'date') defines the rule by which the
condition is interpreted. As such, the square brackets should always
follow afterwards. For simple conditions, which are based on the
existence of configuration keys, the placement of brackets matters for
how the conditional statement is interpreted. E.g., the condition:

  [key.subkey] option=value

is not equivalent to

  key.[subkey] option=value

The second line assumes that 'option' is also a branch of 'key', so it
actually sets 'key.option' conditional on 'key.subkey'. In other words,
the following would be truly equivalent statements:

  key.[subkey] option=value  =  [key.subkey] key.option=value

Here is an example to illustrate the difference with actual settings:

  [source.model] blacklist clip

blacklists 'clip' whenever a source model is defined (via
'source.model'). On the other hand,

  source.[model] filter

can be used to activate 'source.filter' when 'source.model' is defined.

It is possible to clear all conditional settings by:

  forget conditions


Aliases
-------

CRUSH allows you to create your own shorthands for convenience, using
the 'alias' key. Some shorthands are predefined for convenience. For
example, one may prefer to simply type 'array' instead of
'correlated.obs-channels' (referring to the common-mode signals seen by
the group of observing channels). This shorthand is set (in
'default.cfg') by the statement:

  alias.array correlated.obs-channels

Thus, the option:

  array.resolution=1.0

is translated by CRUSH into

  correlated.obs-channels.resolution=1.0

Aliases are literal substitutions. Therefore, they can also be used to
shorthand full (or partial) statements. In this way, 'altaz' is defined
to be a shorthand for 'system=horizontal'. You can find this definition
(in 'default.cfg') as:

  alias.altaz system horizontal

Finally, conditions can also be aliased.
An example of this is the preconfigured alias (also in 'default.cfg')
'final', which is defined as

  alias.final iteration.[last]

Thus, the command line option '-final:smooth=beam' is equivalent to
typing '-iteration.[last]smooth=beam'. (The ':' serves as a way of
separating conditions from the setting on the command line, where
spaces aren't allowed unless placed in a literal quote.)


References
----------

As of version 2.12, CRUSH allows both static and dynamic references to
be used when setting configuration values. All references are placed
inside curly brackets. After the opening '{' a special character is
used to define what type of reference is used:

  Table. Referencing
  --------------------------------------------------------------
  Description                           Symbol    Example
  ==============================================================
  Static reference to another             &       {&datapath}
  configuration value/property
  --------------------------------------------------------------
  Dynamic reference to a                  ?       {?tau.225GHz}
  configuration value/property
  --------------------------------------------------------------
  Shell environment variable              @       {@HOME}
  --------------------------------------------------------------
  Java property                           #       {#user.home}
  --------------------------------------------------------------

Thus, for example, you can set the path to the output data (e.g.
images) relative to the raw (input) data (which is specified by
'datapath'). E.g.:

  outpath = {&datapath}/images

So, if your 'datapath' was set to '/data/myobservations', then CRUSH
will write its output into '/data/myobservations/images'. You could
also have used:

  outpath = {?datapath}/images

The difference is that the former is evaluated only once, when the
statement is parsed, substituting whatever value 'datapath' had at that
particular point. In contrast, the latter, dynamic statement is
evaluated every time it is queried by CRUSH, always substituting the
current value of 'datapath'.
While the two forms can have effectively identical results if
'datapath' remains unchanged, there are particular scenarios when you
might need one or the other form specifically. Here are two examples:

* Static reference: Suppose you want to amend a previously defined
  value. For example, you want to read data from a sub-directory of the
  current datapath. This requires the new datapath to refer back to its
  prior value. If this is done with a dynamic reference, it will result
  in an infinite recursion. Therefore, you will always want to use
  static self-references:

  datapath = {&datapath}/jan2012

* Dynamic reference: Suppose you want to refer to a value that has not
  yet been defined. An example would be to try to write output into
  sub-folders by object name (e.g. for GISMO, where the object name is
  usually defined for locating scans). Then, you would write:

  datapath = /home/images/{?object}

  So by setting this statement ahead of time (e.g. in a global
  configuration file), it can always have its desired effect.

In addition to querying configuration settings, the same syntax can be
used to look up other CRUSH properties. Currently, the following are
defined:

  instrument   The name of the instrument providing the data.

  version      The CRUSH version (e.g. 2.13-1).

  fullversion  The CRUSH version including extra revision information.
               E.g. '2.13-1 (beta1)'

Thus you can, for example, set the output directory by instrument name
and CRUSH version. E.g.:

  outpath = {&outpath}/{&instrument}/{&version}

So, if you took data with LABOCA and reduced it with CRUSH 2.13-1, then
the output will go to the 'laboca/2.13-1' subfolder within the
directory specified by 'outpath'.


Path Names
----------

Path names generally follow the rules of your OS. However, in order to
enable platform-independent configuration files, the UNIX-like '/' is
always permitted (and is generally preferred) as a path separator. As a
result, you should avoid using path names that contain '/' characters
(even in quoted or escaped forms!)
other than those separating directories.

Since CRUSH allows the use of environment variables when defining
values (see above), you can use {@HOME} for your UNIX home directory,
or {@HOMEPATH} for your user folder under Windows. Or, {#user.home} has
the same effect universally, by referring to the Java property storing
the location of the user's home folder. The tilde character '~' is also
universally understood to mean your home folder. And, finally, the
UNIX-like '~johndoe' may specify johndoe's home directory in any OS, as
long as it shares the same parent directory with your home folder (e.g.
both johndoe and your home folder reside under a common '/home' or
'C:\Documents and Settings').

Thus, in UNIX systems (including MacOS), you may use:

  datapath={@HOME}/mydata            # environment variables
  datapath={#user.home}/mydata       # Java properties
  datapath=~/mydata                  # the '~' shorthand
  datapath=~johndoe/data             # relative to another user
  datapath=/mnt/data/2010            # fully qualified path names

while in Windows any of the following are acceptable:

  datapath="~\My Data"               # using the '~' shorthand
  datapath="{@HOMEPATH}\My Data"     # using environment variables
  datapath="{#user.home}\My Data"    # Java properties
  datapath=D:/data/Sharc2/2010       # UNIX-style paths
  datapath=D:\data\Sharc2\2010       # proper Windows paths
  datapath="~John Doe\Data"          # relative to another user


Wildcards
---------

Thanks to the branching of configuration keys, the wildcard '*' can be
used to configure all existing branches of a key at once. E.g.:

  correlated.*.resolution=1.0

will set the 'resolution' for every currently defined 'correlated'
modality. Thus, if you had the common 'obs-channels' and 'gradients'
modes defined, as well as an instrument mode, say 'cables', then the
above is equivalent to:

  correlated.obs-channels.resolution 1.0
  correlated.gradients.resolution 1.0
  correlated.cables.resolution 1.0

Additionally, wildcards can be used with 'forget', 'blacklist' or
'remove'.
E.g.:

  forget despike.*

clears all sub-settings of 'despike' (while keeping despiking enabled,
if it already was).


Checking Configuration State
----------------------------

You can check the current configuration using the 'poll' command.
Without an argument it lists the full current configuration, e.g.:

  > crush [...] -poll

lists all the currently active settings at the time of polling, as well
as all settings that can be recalled (i.e. were unset using 'forget').
The complete list of settings can be long, especially when you just
want to check for specific configurations. In this case, you can
specify an argument to poll, to list only the settings that start with
the specified pattern. For example, suppose you only care to check the
despiking settings (for the first round of despiking). Then, you would
type:

  > crush [...] -poll=despike

Or, you can also type:

  > crush [...] -despike.poll

The difference is that the first method lists all configuration keys
from the root of the configuration tree that start with the word
'despike', whereas the second example lists the settings in the subtree
of the 'despike' key (hence without 'despike' appearing in the list).

You can also check on conditional statements similarly. E.g.:

  > crush [...] -conditions

or,

  > crush [...] -conditions=date

for a selected list of conditions starting with 'date', or

  > crush [...] -date.conditions

for the subtree of conditions under the 'date' key (without prepending
the 'date' key itself).

Finally, you can also check which settings are currently blacklisted,
using the 'blacklist' command without an argument. E.g.:

  > crush [...] -blacklist

or,

  > crush [...] -despike.blacklist

to check only for blacklisted settings under the despike branch.


Startup Configuration
---------------------

At launch, crush will parse 'default.cfg' to establish a default
configuration set.
All configuration files are first parsed from the crush directory; then additional options or overrides are also read from the appropriate instrument subdirectories, if they exist. After that, crush will check whether these configuration files also exist inside the user's '~/.crush2' directory. If so, they will be used to override again. See more on this in the next section under the 'config' option. In this way, the '~/.crush2' directory provides a convenient way to create user-specific settings and/or overrides to the global defaults.

#####################################################################
2. Advanced Topics
#####################################################################

2.1 Filtering corrections, transfer functions, etc.
===================================================

Most reduction steps (decorrelation, 1/f drift removal, spectral filtering) will impact different spatial frequencies of the source structure differently. In general, the low spatial frequency components are the most affected (suppressed). E.g., decorrelating the full array will reject scales larger than the field of view, while 1/f filtering will result in 1/f spatial filtering of the map due to the scanning motion, which more or less directly maps temporal frequencies into spatial frequencies.

The approach of CRUSH is to apply appropriate point-source corrections (rescaling) such that point sources in the map yield the same fluxes no matter how (i.e. with what options exactly) the source was reduced. While the corrections will be readily sufficient for a large fraction of the science cases in this simple form, there are two notable exceptions to this rule: (i) extended emission, and (ii) small, fast maps reduced in 'deep' mode (when the source is scanned over the same detector more than once within the 'stability' timescale). The better approach for these cases is to measure a transfer function, or otherwise check the reduction of a similar simulated source.
Transfer functions and simulated sources
----------------------------------------

The 'sources' option provides a means for inserting test sources into CRUSH maps, while one of the 'jackknife' options can be used to remove any real emission beforehand, while retaining the signal and noise structure of the data otherwise (which is important in order to get a representative measure of the transfer function). E.g.:

  > crush [...] -jackknife.alternate -sources=test.mask ...

will apply an alternating jackknife to the input scans, and insert sources specified in the mask file 'test.mask' (see 'example.mask' on the format of mask files, and the GLOSSARY for more on jackknifing options).

To measure transfer functions (i.e. a complete spatial characterization of the point-source response) you would want to insert a single beam-sized point source. Alternatively, you can insert one or more Gaussian-shaped source(s) with a range of FWHMs to create a simulated source structure that resembles the structure you expect your source(s) to have.

Make sure your test source is bright enough to see with high S/N, but not so bright as to trigger unintended flagging or despiking. In general, an S/N between 100 and 1000 should be fine for default reductions, and 100 to 300 for 'faint' or 'deep' modes. Additionally, in 'faint' or 'deep' modes, you may want to disable some features which may affect your relatively bright test sources differently than your much fainter real science target(s). Thus, the following are recommended for reducing 'faint' and 'deep' test sources:

  > crush [...] -blacklist=clip,blank,source.filter.blank

Note also that the spatial filtering (transfer function) will vary with location on the map (since the scanning speed and direction are themselves non-uniform over the map). Therefore, it is strongly recommended that test sources are inserted near the same locations as the real sources in the field.
2.2 Making sense of the console output
======================================

Say you are reducing two scans with the command:

  ./crush laboca -deep 12065 12066

The output begins with some minimalistic capture of the scans that are read in. This is reasonably straightforward. Then you'll get some information on the type of reduction that has been selected. This refers to the brightness and extent of the source. In this case, since '-deep' was specified without a further specification of source size, the reduction will assume deep-field-type sources that are point-like. After this, some basic information is given on the source and the map parameters.

Finally, the reduction enters its main iterative phase. You'll see some cryptic words and some numbers in a row. Each letter corresponds to an incremental modeling of the time-series signals, while the integer numbers tell you how many pixels remain unflagged after a step which can discard 'funky' pixels. Here's what the various bits stand for:

Inside brackets:
----------------

  [nnnnn|m]  Processing scan nnnnn, subscan m from the list.

  (#.##E#)   The effective point-source NEFD (usually after weighting or at
             the mapping step), shown in the effective mapping units times
             sqrt(s).

  []         Bracketed models are solved via median estimators.

  (#)        Indicates the time resolution of the given step as the number
             of downsampled frames, when this is not the default full
             resolution.

Reduction-step Shorthands
-------------------------

  a          Decorrelating amplifier boards (e.g. LABOCA, GISMO).
  am         Remove correlated telescope acceleration (magnitude).
  ax         Remove correlated telescope x acceleration.
  ay         Remove correlated telescope y acceleration.
  B          Decorrelating electronic boxes (e.g. LABOCA).
  C          Solving for correlated noise and pixel gains. The time
             resolution (in frames) is shown in brackets if not full
             resolution, followed by the number of unflagged pixels if gains
             are solved and a gain range is defined for flagging.
  c          Decorrelating electronic cables (e.g. LABOCA) or geometric
             columns (e.g. GISMO).
  cx         Remove correlated chopper x position.
  cy         Remove correlated chopper y position.
  D          Solving for pixel drifts (1/f filtering).
  dA(%)      Despiking absolute deviations. In brackets it shows the
             percentage of frames flagged in the data as spiky by any
             method.
  dF(%)      Despiking wider features.
  dG(%)      Like the above, but proceeds gradually.
  dN(%)      Despiking using neighbouring samples in the timeseries.
  E          Remove correlated SAE error signal (GISMO).
  G          Estimating atmospheric gradients across the FOV.
  J          De-jumping frames (e.g. for GISMO). Followed by two colon (:)
             separated numbers: the first is the number of de-jumped blocks
             that were re-levelled and kept; the second is the number of
             blocks flagged.
  m          Decorrelating on multiplexers (e.g. GISMO, SCUBA-2).
  Mf         Filtering principal telescope motion.
  O          Solving for pixel offsets.
  p          Decorrelating on (virtual) amplifier pins (e.g. GISMO).
  Q          Decorrelating wafers (e.g. ASZCA).
  r          Decorrelating on geometric rows (e.g. SHARC-2, GISMO).
  t          Solving for the twisting of band cables (e.g. LABOCA).
  tx         Remove correlated telescope x position.
  ty         Remove correlated telescope y position.
  tW         Estimating time weights.
  W          Estimating pixel weights.
  w          Estimating pixel weights using the 'differential' method.
  wh(x.xx)   Noise whitening. The average source throughput factor from the
             whitening is shown.

Source model-specific:
----------------------

  Map        Calculating the source map from a scan. The effective point-
             source NEV/NEFD of the instrument is shown in the brackets
             (e.g. as Jy sqrt(s)).
  [C1~#.##]  Filtering corrections are applied directly and are #.## on
             average.
  [C2=#.##]  Faint structures are corrected for filtering after the mapping
             via an average correction factor #.##.
  [Source]   Solving for the source map.
  {check}    Discarding invalid map data.
  {despike}  Despiking scan maps before adding them to the composite.
  {filter}   Spatial large-scale structure (LSS) filtering of the scan maps.
  (level)    Zero-levelling to the map median.
  (smooth)        Smoothing the map.
  (filter)        Spatial filtering of the composite map.
  (noiseclip)     Clip out the excessively noisy parts of the map.
  (filter)        Filtering large-scale structures (i.e. sky noise).
  (exposureclip)  Clipping under-exposed map points.
  (blank)         Blank excessively bright points from noise models.
  (sync)          Removing the source model from the time-stream.

Thus, the lines on the console output:

  $ Round 4:
  $
  $ [11564] D(128) C245 W(8.44E-2)245 dN(0.0%)245 B245 c245 Map(8.44E-2)
  $ [Source] {check} (smooth) (clip:4.0) (blank:10.0) (sync)

are interpreted as follows. In the 4th iteration, the following steps are performed on scan 11564:

  D(128)       -- 1/f filtering (on a 128-frame timescale).
  C245         -- Removing the correlated noise from the array, with 245
                  pixels surviving the gain range criterion.
  W(8.44E-2)   -- Deriving pixel weights with the 'rms' method. The average
                  sensitivity of the array is estimated to be
                  84.4 mJy sqrt(s).
  dN(0.0%)245  -- Despiking via the 'neighbours' method, with 0.0% of the
                  data flagged as spikes, and 245 pixels surviving the
                  flagging of pixels with too many spikes.
  B245         -- Decorrelating on amplifier boxes, with 245 pixels having
                  acceptable gains to the correlated signals.
  c245         -- Decorrelating on electronic cables, with 245 pixels
                  showing acceptable gain response to these signals.
  Map(8.44E-2) -- Mapping the scan, with an estimated point-source NEFD of
                  84.4 mJy sqrt(s).

Then, at the end of the iteration, a composite source model is created. This is further processed as:

  {check}      -- Remove invalid data from the scan maps (before adding to
                  the composite).
  (smooth)     -- Smooth the composite by the desired amount.
  (clip:4.0)   -- Discard map points below an S/N of 4.0 from the composite.
  (blank:10.0) -- Do not use bolometer data in other steps which are
                  collected over map points with S/N > 10, thus reducing the
                  bias that bright sources can cause in the reduction steps.
  (sync)       -- Sync the composite model back to the time-stream data.
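As a quick sanity check on numbers like Map(8.44E-2), a quoted point-source NEFD can be turned into an expected point-source rms via the usual radiometric scaling. This is only a sketch under the assumption that the noise integrates down as 1/sqrt(t); the function name and integration time are illustrative:

```python
# Convert a point-source NEFD (e.g. the 84.4 mJy sqrt(s) printed by the
# mapping step above) into an expected point-source rms after integrating
# for t_int_s seconds, assuming rms = NEFD / sqrt(t_int).
import math

def point_source_rms(nefd_jy_sqrt_s, t_int_s):
    """Expected point-source noise (Jy) after t_int_s seconds on source."""
    return nefd_jy_sqrt_s / math.sqrt(t_int_s)

# E.g. with NEFD = 84.4 mJy sqrt(s) and one hour of effective integration:
rms = point_source_rms(0.0844, 3600.0)   # ~1.4 mJy
```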
Once the reduction is done, various files, including the source map, are written. This is appropriately echoed on the console output.

2.3 Pointing Corrections
========================

Reducing the data with the correct pointing can be crucial (esp. when attempting to detect/calibrate faint point sources). At times, the pointing may be badly determined at the time of observation. Luckily, getting the exact pointing offset wrong at the time of the observation has no serious consequences, as long as the source still falls on the array, and as long as the exact pointing can be determined later.

If, at the time of the data reduction, you have a better guess of what the pointing was when the data was taken (e.g. by fitting a model to the night's pointing data), you can specify the pointing offsets that you believe are more characteristic of the data, by using the '-pointing' option *before* listing the scans that should be reduced with the given corrections. E.g.:

  > crush [...] -pointing=12.5,-7.3 <scans>

will instruct CRUSH that the 'true' pointing was at dAZ=12.5 and dEL=-7.3 arcsec, respectively (i.e. it works just like pcorr in APECS). Some instruments, like SHARC-2, may allow specifying the aggregated pointing offsets (e.g. 'fazo' and 'fzao') instead of the differential corrections supplied by 'pointing'.

Obtaining Corrections
---------------------

Good practice demands that you regularly observe pointing/calibration sources near your science targets, from which you can derive appropriate corrections. CRUSH provides the means for analyzing pointing/calibration data effectively, using the 'point' option:

  > crush [...] -point [...]

At the end of the reduction, CRUSH will analyze the map and suggest appropriate pointing corrections (to use with '-pointing', or other instrument-specific options), and provide calibration information as well as some basic measures of source extent and shape.
After Reduction (a poor alternative)
------------------------------------

You can also make pointing changes after the reduction (alas, now in RA/DEC). You can read off the apparent source position from each separately reduced scan (e.g. by using 'show' and placing the cursor above the apparent source center/peak). Then you can use 'imagetool' to adjust the pointing. E.g.:

  > imagetool [...] -origin=3.0,-4.5 ...

The above moves the map origin by 3" in RA and -4.5" in DEC. Then other crush tools (like coadd, imagetool, etc.) will use these images with the proper alignment. Clearly, this method of adjusting the pointing is only practical if your source is clearly detected in the map.

2.4 Recovery of Extended Emission
=================================

As a general rule, ground-based instruments (in the mm and submm) are only sensitive to structures smaller than the field of view (FoV) of the instrument. All scales larger than the FoV will be strongly degenerate with the bright correlated atmosphere (a.k.a. sky noise), and will be very difficult, if not outright impossible, to measure accurately. In a sense, this is analogous to the limitations of interferometric imaging, or the diffraction-limited resolution of finite apertures.

However, there is some room for pushing this limit, just like it is possible to obtain some limited amount of super-resolution (beyond the diffraction limit) using deconvolution. The limits to the recovery of scales beyond the FoV are similar to those of obtaining super-resolution beyond the diffraction limit. Both of these methods:

  1. yield imperfect answers, because the relevant spatial scales are poorly
     measured in the first place;

  2. are only feasible for high signal-to-noise structures;

  3. can, at best, go a factor of a few beyond the fundamental limit.

You can usually choose to recover more extended emission if you are willing to put up with more noise on those larger scales.
This trade-off works backwards too -- i.e., you can get cleaner maps if you are willing to filter them more.

As a general principle, structures that are bright (>> 5-sigma) can be fully recovered up to a few times the field of view (FOV) of the bolometer array. However, the fainter the structure, the more it will be affected by filtering. Generally, the fainter the reduction mode, the more filtering of faint structures results, and the more limited the possibility of recovering extended structures becomes. The table below is a rough guide to what maximum scales you may expect for such faint features, and also how noise is expected to increase on the large scales as these are added in 'extended' mode:

 |  Table 1.  Maximum faint structure scales for S/N <~ 5        |
 |                                                               |
 |                (compact)       extended       Noise power     |
 |                (DEFAULT)                      with size l     |
 =================================================================
 | bright       |               |              |                 |
 | (DEFAULT)    | FOV/2         | ~FOV         | ~l^2            |
 |              |               |              |                 |
 |              | 2*sourceSize  |              |                 |
 | faint, deep  | or FOV/2      |              | ~l              |
 |              | 2*beam        |              |                 |
 -----------------------------------------------------------------

Iterating longer will generally help recover more of the not-too-faint large-scale emission (along with the large-scale noise!). Thus,

  > crush -extended -rounds=50 [...]

will generally recover more extended emission than just the default

  > crush -extended [...]

(which iterates 10 times). In general, you may expect to recover significant (>5 sigma) emission up to scales L as:

  L ~= FoV + sqrt(N v T)

in terms of the number of iterations N, the limiting field of view (FoV), the scanning velocity v, and the correlated-noise stability timescale T. Unfortunately, the correlated-noise stability of most ground-based instruments is on the order of a second or less, due to a highly variable atmosphere. At the same time, the noise rms on the large scales will increase asymptotically as:

  rms ~ sqrt(N)

for large numbers of iterations.
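The scale estimate above is easy to evaluate for your own setup. The sketch below just codes up L ~= FoV + sqrt(N v T); the instrument numbers in the example are made-up illustration values, not specs for any particular camera:

```python
# Rough recoverable-scale estimate from the rule of thumb above:
# L ~= FoV + sqrt(N * v * T), for N iterations, scanning speed v, and
# correlated-noise stability timescale T.
import math

def max_recovered_scale(fov_arcsec, n_rounds, speed_arcsec_s, stability_s):
    """Approximate largest significantly recovered scale, in arcsec."""
    return fov_arcsec + math.sqrt(n_rounds * speed_arcsec_s * stability_s)

# E.g. a 120" FoV, 30 rounds, 60"/s scanning, and ~1 s stability:
L = max_recovered_scale(120.0, 30, 60.0, 1.0)   # ~162 arcsec
```

Note how weakly L grows with the number of rounds: quadrupling N only doubles the sqrt term, while the large-scale rms grows as sqrt(N).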
2.5 Pixelization and Smoothing
==============================

There seem to be many misconceptions about the 'correct' choice of pixelization and about the (mal)effects of smoothing. This section aims to offer a balanced account of choosing an appropriate mapping grid, and of the pros and cons of applying smoothing to your maps.

Pixelization (i.e. choosing the 'right' grid)
---------------------------------------------

Misconception 1: You should always 'Nyquist' sample your map, with 2 pixels per beam FWHM, to ensure ideal representation.

There is more than one thing wrong with the above idea. First, there is no 'Nyquist' sampling of Gaussian beams. The Nyquist sampling theorem applies only to data that has a cutoff frequency (i.e. the Nyquist frequency) above which there is no information present. Then, and only then, will sampled data (at a frequency strictly larger[!] than twice the Nyquist cutoff) preserve all information about the signals.

Secondly, 2 pixels per beam is almost certainly too few for the case of Gaussian beams. Here is why: Gaussian beams have no actual frequency cutoff -- the signal spreads across the spectrum, with information present at all frequencies, even if it is increasingly tapered at the higher frequencies. By choosing a sampling (i.e. pixel size in your map), the information above your sampling rate will be aliased into the sampled band, corrupting your pixelized data. Thus, you can choose a practical cutoff frequency (i.e. pixelization) based on the level of information loss and corruption you are willing to tolerate. At 2.5 pixels per FWHM, you retain ~95% of the underlying information, and corrupt it by the remaining 5%. With more pixels per beam, you get more accurate maps. (The 2 pixels per beam that many understand to be Nyquist sampling is certainly too few by most standards of accuracy!)
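The aliasing argument can be illustrated numerically. The toy 1-D calculation below computes the fraction of a Gaussian beam's spectral power that lies beyond the grid's Nyquist frequency, i.e. the part available to alias. It is only a sketch: 'information' can be quantified in several ways, so this particular measure will not reproduce the exact percentages quoted above, but it shows the same qualitative behaviour (finer grids leave less to alias):

```python
# Toy model: a Gaussian beam has no spectral cutoff, so some power always
# lies beyond the Nyquist frequency of the chosen grid. With B(k) =
# exp(-k^2 sigma^2 / 2), the fraction of power |B(k)|^2 above k_Nyquist is
# erfc(k_Nyquist * sigma).
import math

def out_of_band_power(pixels_per_fwhm):
    """Fraction of a 1-D Gaussian beam's spectral power beyond the grid's
    Nyquist frequency, for a given number of pixels per beam FWHM."""
    sigma = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # beam FWHM = 1
    k_nyq = math.pi * pixels_per_fwhm                      # pi / pixel size
    return math.erfc(k_nyq * sigma)

# The power left to alias drops steeply as the grid is refined:
for n in (2.0, 2.5, 5.0):
    print(n, out_of_band_power(n))
```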
Thus, the CRUSH default is to use around 5 pixels per FWHM, representing a compromise between completeness (~99%) and a senseless increase of map pixels (i.e. number of model parameters).

Smoothing
---------

Misconception 2: You should not smooth your maps, because it does something 'unnatural' to your 'raw' image.

Again, there are two reasons why this is wrong. The first problem with this view is that there is no 'natural' or 'raw' image to start with, because there is no 'natural' pixelization (see above). Secondly, by choosing a map pixel size, you effectively apply smoothing -- only that your smoothing kernel is a square pixel, with an abrupt edge, rather than a controlled (and directionless) taper of choice.

(To convince yourself that map pixels apply smoothing, consider the mapping process: each pixel will represent an average of the emission from the area it covers. Thus, the pixel values are essentially samples of a moving integral under the pixel shape -- i.e. samples of the emission convolved [smoothed] by the pixel shape.)

However, smoothing does have one real con, in that it degrades the effective resolution of the image. Consider smoothing by Gaussian kernels. Then the image resolution (imageFWHM) increases with the smoothing width (smoothFWHM) as:

  imageFWHM^2 = beamFWHM^2 + smoothFWHM^2

from the instrument resolution (beamFWHM). However, you can smooth a fair bit before the degradation of resolution becomes a problem. If you use sub-beam smoothing with smoothFWHM < beamFWHM, then the relative widening is:

  ~ 0.5 * (smoothFWHM / beamFWHM)^2

Thus, smoothing by half a beam degrades resolution by only ~12%...

At the same time, smoothing can bring many benefits:

 * Improves image appearance and S/N by rejecting pixelization noise. It is
   better to use a finer grid and smooth the image by the desired amount
   than to pixelize coarsely -- not only for visual appearance but also for
   the preservation of information.
 * Smoothing can help avoid pathological solutions during the iterations, by
   linking nearby pixel values together.

 * Beam-smoothing is the optimal (Wiener) filtering of point sources in a
   white noise background. Thus, smoothing by the beam produces the highest
   signal-to-noise measurement of point sources.

 * Beam-smoothing is mathematically equivalent to fitting fixed-width beams
   at every pixel position. The beam-smoothed flux value in each pixel
   represents the amplitude of a beam fitted at that position using
   chi-squared minimization. Thus, beam-smoothed images are the bread-and-
   butter of point source extraction (e.g. the 'detect' tool of CRUSH).

Given the pros and the con of smoothing, the different reduction modes (default, 'faint' or 'deep') of CRUSH make different compromises. The default reduction aims to provide maps with maximal resolution (i.e. no smoothing), although some smoothing is used during the iterations to aid convergence to a robust solution. 'faint' mode reductions smooth by 2/3 of a beam (resulting in ~22% degradation of resolution), to provide better signal-to-noise at a moderate loss of spatial resolution. Finally, 'deep' reductions yield beam-smoothed images, which are ideal for point source extraction, even if some spatial details are sacrificed.

You can change/override these default settings. The smoothing during reduction is controlled by the 'smooth' setting, which can take either a Gaussian FWHM (in arcsec) as its argument, or one of the preset values 'minimal', 'halfbeam', '2/3beam' and 'beam'. The smoothing of the final map can be controlled by 'final:smooth' (the 'final' stands as a shorthand conditional for the last iteration). Thus,

  > crush [...] -smooth=halfbeam -final:smooth=minimal [...]

smooths by half a beam during the iterations, and only slightly for the output image, while

  > crush [...] -forget=smooth -final:forget=smooth [...]

disables smoothing both during reduction and at the end.
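The resolution cost of smoothing quoted above (the quadrature relation imageFWHM^2 = beamFWHM^2 + smoothFWHM^2, with the ~0.5*(smoothFWHM/beamFWHM)^2 approximation for small kernels) can be checked directly. A sketch, with illustrative function names:

```python
# Exact and approximate beam widening under Gaussian smoothing.
import math

def image_fwhm(beam_fwhm, smooth_fwhm):
    """Resulting image resolution: quadrature sum of beam and kernel."""
    return math.sqrt(beam_fwhm**2 + smooth_fwhm**2)

def widening(beam_fwhm, smooth_fwhm):
    """Exact fractional resolution loss."""
    return image_fwhm(beam_fwhm, smooth_fwhm) / beam_fwhm - 1.0

# Half-beam smoothing widens the beam by ~12%, as stated in the text:
w = widening(1.0, 0.5)           # ~0.118 exactly
approx = 0.5 * (0.5 / 1.0)**2    # 0.125 from the small-kernel approximation
```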
The table below summarizes the effect of the preset smoothing values, indicating both the degradation of resolution (relative to the telescope beam) and the relative peak S/N on point sources:

 | Table 2. Smoothing properties       |
 ---------------------------------------
 | setting      widening     rel. S/N  |
 =======================================
 | minimal         6%          0.33    |
 | halfbeam       12%          0.50    |
 | 2/3beam        22%          0.67    |
 | beam           41%          1.00    |
 ---------------------------------------

2.6 Image processing post-reduction
===================================

CRUSH also provides 'imagetool' for post-processing images after reduction. The typical usage of imagetool is:

  > imagetool -name=<output.fits> [options] <input.fits>

which processes <input.fits> with the given options, and writes the result to <output.fits>.

With imagetool, you can apply the desired level of smoothing (-smooth=X) or spatial filtering (-extFilter=X), and specify circular regions to be skipped by the filter (-mask=<filename>). You can adjust the clipping of noisy map edges by relative exposure (-minExp=X) or by relative noise (-maxNoise=X). You can also crop a rectangular region (-crop=dx1,dy1,dx2,dy2). There are many more image operations. See the imagetool manual (in your UNIX shell, or online), or simply run 'imagetool' without an argument:

  > imagetool

One of the useful options allows you to toggle between the noise estimate from the detector time-streams (-noise=data) and from the image itself (-noise=image) using a robust estimator. For example, after spatial filtering, you probably want to re-estimate the map noise:

  > imagetool [...] -extFilter=45.0 -noise=image <input.fits>

The built-in image display tool 'show' also takes all the processing options of imagetool, but rather than writing the result, it will display it in an interactive window. See the manual page of 'show' (either inside your UNIX shell, or online), or run 'show' without an argument.

2.7 Reducing Very Large Datasets
================================

Coming soon...
2.8 Custom Logging Support
==========================

CRUSH (as of version 2.03, really) provides a powerful scan/reduction logging capability via the 'log' and 'obslog' keys. 'log' writes the log entries for each scan *after* the reduction, whereas 'obslog' does not reduce data at all; it only logs the scans immediately after reading them. While both logging functions work identically, some of the values are generated during reduction, and therefore may not be available to 'obslog'.

Log Files and Versioning
------------------------

You can specify the log files to use with the 'log.file' and 'obslog.file' keys (the default is to use <instrument>.log and <instrument>.obs.log in the 'outpath' directory). Equivalently, you can also set the filename directly as an optional argument to 'log' and 'obslog':

  > crush [...] -log=myLog.log [...]

You can specify the quantities, and the formatting in which they will appear, using the 'log.format' and 'obslog.format' keys. Below you will find more information on how to set the number formatting of quantities, and a list of values available for logging.

A log file always has a fixed format: the one which was used when creating it. Therefore, a conflict may arise if the same log file is specified for use with a different format. The policy for resolving such conflicts can be set via the 'log.conflict' and 'obslog.conflict' keys, which can have one of the following values:

  overwrite   Overwrites the previous log file with a new one in the newly
              specified format.

  version     Tries to find a sub-version of the same log file (with .1, .2,
              .3 ... extension added) in the new format, or creates the next
              available sub-version.

The default behaviour is to assume versioning, in order to preserve information in case of conflicts.

Decimal Formatting of Values
----------------------------

Many of the quantities you can log are floating-point values, and you have the possibility of controlling how these will appear in your log files.
Simply put one of the formatting directives in brackets after the value to specify its format. E.g., the keys 'RA' or 'RAh' will write the right ascension coordinate either as radians or as hours, with the default floating-point formats. However, 'RA(h:2)' will write the value in human-readable 'hh:mm:ss.ss' format, whereas 'RAh(f3)' will express it as 'hh.hhh'.

You can choose from the following formats to express various quantities. Be careful, because not all formats are appropriate to all types of data. (For example, you should not try to format angles expressed in degrees with the DMS formatting capabilities of the 'a' angle formats. Use these only with angles expressed in radians!)

  d0...d9     Integer format with 0...9 digits. E.g. 'd3' will write Pi
              (3.1415...) as 003.

  e0...e9     Exponential format with 0...9 decimals. E.g. 'e4' will write
              Pi as 3.1416E0.

  f0...f9     Floating-point format with 0...9 decimals. E.g. 'f3' will
              write Pi as 3.142.

  s0...s9     Human-readable format with 0...9 significant figures (floating
              point or exponential format, whichever is more compact).

  a:0...a:3
  as0...as3
  al0...al3   Angle format with 0...3 decimals on the seconds. E.g. 'a:1'
              produces angles in ddd:mm:ss.s format. Use only with values
              expressed as radians (not in degrees!). As such, 'a:1' will
              format Pi as '180:00:00.0'. The difference between the 'a:',
              'as', and 'al' formats is the separators used between degrees,
              minutes and seconds (colons, symbols, and letters,
              respectively).

  h:0...h:3
  hs0...hs3
  hl0...hl3   Hour-angle format (e.g. for the RA coordinate) with 0...3
              decimals on the seconds. E.g. 'h:2' formats angles in
              'hh:mm:ss.ss' format. Use only with values expressed as
              radians (not in degrees!). As such, 'h:2' will format Pi as
              '12:00:00.00'. The difference between the 'h:', 'hs', and
              'hl' formats is the separators used between hours, minutes
              and seconds (colons, symbols, and letters, respectively). E.g.
Pi will be:

    h:1    12:00:00.0
    hs1    12h00'00.0"
    hl1    12h00m00.0s

  t:0...t:3
  ts0...ts3
  tl0...tl3   Time format with 0...3 decimals on the seconds. E.g. 't:1'
              formats time in 'hh:mm:ss.s' format. Use only on time values
              expressed in seconds! The difference between the 't:', 'ts',
              and 'tl' formats is the separators used between hours, minutes
              and seconds (colons, symbols, and letters, respectively).

Quantities and values that can be logged
----------------------------------------

Currently, CRUSH offers the following quantities for logging. Directives starting with '?' will log the values of configuration keys. Other quantities reflect the various internals of the scan or instrument state. More quantities will be added to this list in the future, especially values that are specific to certain instruments only. Keep an eye out for changes/additions :-).

  ?<keyword>  The value of the configuration <keyword>. If the configuration
              option is not defined, '---' is written. If the keyword is set
              without a value, then '<true>' is written.

  AZ          Azimuth coordinate (in radians). E.g. 'AZ(a:1)' produces
              ddd:mm:ss.s formatted output. See also 'AZd' and 'EL'.

  AZd         Azimuth coordinate in degrees. E.g. 'AZd(f2)' produces output
              in ddd.dd format. See also 'AZ' and 'ELd'.

  channels    Number of channels proce
Source: README, updated 2015-12-14
