In my last post, I ran a full r-process calculation. The calculation, which took 15 hours on my laptop, used the default "arrow" matrix solver in wn_matrix. In this post I will show how I used a Krylov-subspace iterative solver to accelerate the r-process calculation.
SPARSKIT is a basic toolkit for sparse matrix computations, written and maintained by Yousef Saad. Our code module wn_sparse_solve interfaces our basic matrix routines in wn_matrix with SPARSKIT.
To use SPARSKIT2, the latest version, in the r-process calculation, I first set the environment variable NNT_USE_SPARSKIT2 and recompiled the network code. The network code can be either the base example code in examples/network or, for my present purposes, the version with my user-defined trajectory.
First, I set the environment variable.
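Assuming a bash-style shell, setting the variable might look like this (the value itself is unimportant; the build only checks whether the variable is defined):

```shell
# Hedged sketch: define the variable so the build picks up SPARSKIT2.
# The value "1" is an arbitrary choice; the build should only test
# whether NNT_USE_SPARSKIT2 is set.
export NNT_USE_SPARSKIT2=1
```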
Then I cleaned and remade the code.
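A sketch of the rebuild, assuming the project uses GNU make with a conventional clean target and a default build target (the exact target names may differ in your checkout):

```shell
# Hypothetical sketch: wipe the old objects and rebuild so that the
# NNT_USE_SPARSKIT2 setting takes effect; during this build the
# wn_sparse_solve module and SPARSKIT2 get fetched and compiled.
make clean
make
```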
This downloaded wn_sparse_solve and SPARSKIT2 and rebuilt the network code. SPARSKIT2 is written in Fortran, so the default compilation uses gfortran. gfortran is part of the GNU Compiler Collection (gcc), so the compilation should go smoothly; if it does not, try updating your gcc installation.
Once you have successfully compiled the code, you can run it as before. I used the following ../../data_pub/zone.xml file:
<zone_data>
  <zone>
    <optional_properties>
      <property name="tend">1.e6</property>
      <property name="tau_0">0.035</property>
      <property name="tau_1">1.</property>
      <property name="munuekT">-inf</property>
      <property name="t9_0">10.</property>
      <property name="steps">20</property>
      <property name="rho_0">1.4985e6</property>
      <property name="rho_1">1.5e3</property>
      <property name="iterative solver method">gmres</property>
      <property name="t9 for iterative solver">2.</property>
    </optional_properties>
    <mass_fractions>
      <nuclide name="n">
        <z>0</z>
        <a>1</a>
        <x>0.67</x>
      </nuclide>
      <nuclide name="h1">
        <z>1</z>
        <a>1</a>
        <x>0.33</x>
      </nuclide>
    </mass_fractions>
  </zone>
</zone_data>
Notice the addition of the two properties iterative solver method and t9 for iterative solver. The former tells the code to use the iterative solver method gmres. I've had good luck with that solver, but you can try other solvers such as bicgstab. See wn_sparse_solve for more choices. The latter property tells the code when to start using the iterative solver instead of the default solver. In the above, I set the property so that the code used the gmres solver for temperatures below T9 = 2. Experience has shown that above this temperature, the iterative solver has trouble converging, at least for the default preconditioner.
With these preliminaries completed, I ran the code by typing:
./run_single_zone ../../data_pub/my_net.xml ../../data_pub/zone.xml my_output.xml "[z <= 90]"
The execution time on my laptop dropped to 3.5 hours, better than a factor-of-four speedup. In my next posts, I will explore the reason for this speedup.