Contents (all modified 2016-06-03):

    README                 1.7 kB
    sun10b_hiR_30.txt      1.8 kB
    sun10b_Rg2_30.txt      2.0 kB
    jansson12_hiR_30.txt   1.6 kB
    jansson12_Rg2_30.txt   1.8 kB
    jansson12b_hiR_30.txt  2.1 kB
    jansson12b_Rg2_30.txt  2.1 kB
    jansson12c_hiR_30.txt  1.9 kB
    jansson12c_Rg2_30.txt  2.1 kB
    jaffe13b_hiR_30.txt    2.5 kB
    jaffe13b_Rg2_30.txt    2.7 kB
These parameter files reproduce the models discussed in the paper

	Planck intermediate results. XLII. Large-scale Galactic magnetic fields

As described in that paper, the integration is split into R < 2 kpc
and R > 2 kpc simulations.  This is done by running hammurabi twice
for each model, once with the "hiR" parameters and once with the "Rg2"
parameters.  (The results are written to an "out" subdirectory, which
must be created before running.)  The resulting maps of Stokes I, Q,
and U can then simply be added together, since these simulations are
at 30 GHz, well outside the Faraday regime.  For the results in the
paper, a set of 10 realizations, each with a numerically simulated
Gaussian random field (GRF), was generated for each model.  For each
pixel, the mean and RMS variation among those realizations were then
plotted to characterize the average morphology and the expected
deviations, i.e. the "galactic variance".
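The combine-and-average step can be sketched in Python.  This is a
minimal sketch: a real analysis would read the FITS maps written to
the "out" subdirectory (e.g. with healpy), but here random NumPy
arrays stand in for the maps.

```python
import numpy as np

def combine_maps(hi_iqu, rg2_iqu):
    """Add the R < 2 kpc and R > 2 kpc maps pixel by pixel.

    Valid at 30 GHz, where Faraday rotation is negligible, so the
    Stokes I, Q, and U maps from the two runs add linearly."""
    return hi_iqu + rg2_iqu

def galactic_variance(realizations):
    """Per-pixel mean and RMS deviation across GRF realizations."""
    stack = np.stack(realizations)  # shape (n_realizations, 3, n_pix)
    return stack.mean(axis=0), stack.std(axis=0)

# Toy stand-ins for 10 realizations of (I, Q, U) maps, 12 pixels each
rng = np.random.default_rng(0)
reals = [combine_maps(rng.normal(size=(3, 12)), rng.normal(size=(3, 12)))
         for _ in range(10)]
mean_map, rms_map = galactic_variance(reals)
```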

These models require the latest version of hammurabi built with
Galprop.  The Galdef files are included in the GALDEF directory above.
A few input files needed by Galprop are included in the tar file for
the modified code.

Note that although 30 GHz is above the Faraday regime, these parameter
files also specify the computation of a Faraday rotation measure (RM)
map for comparison.  This requires the thermal electron density model
given as a Cartesian grid, which can also be found in the main
directory (negrid_n400.bin).

Note that these are set to do the full-resolution simulations as
described in the paper.  On my Linux cluster, each takes roughly 24
hours in total.  When compiled with OpenMP and run in parallel, for
example on a node with a dozen processors, that drops to about 2 hours
of wall time.
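A parallel run can be driven as below, using the standard OpenMP
OMP_NUM_THREADS environment variable.  The binary name and invocation
are assumptions; adjust them to match your build.

```python
import os
import subprocess

def hammurabi_command(param_file, n_threads=12):
    """Build the command and environment for an OpenMP-parallel run.

    ASSUMPTION: the executable is ./hammurabi and takes the parameter
    file as its only argument."""
    env = dict(os.environ, OMP_NUM_THREADS=str(n_threads))
    return ["./hammurabi", param_file], env

cmd, env = hammurabi_command("jaffe13b_hiR_30.txt")
# subprocess.run(cmd, env=env, check=True)  # uncomment to launch
```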

Source: README, updated 2016-06-03