Frequently Asked Questions
Compiling
What version of C++ is needed?
We use C++17. Most modern compilers support the features we need.
The code doesn’t compile, but complains right away that there is “No rule to make target ‘AMReX_constants_mod.o’, needed by ‘tmp_build_dir/d/2d.gnu.MPI/f90.depends’”
The environment variable AMREX_HOME needs to be the full path to the amrex/ directory. You cannot use ~ as a shortcut for your home directory.

make issues an error like:
$ make
Loading /home/zingale/development/amrex//Tools/GNUMake/comps/gnu.mak...
Loading /home/zingale/development/amrex//Tools/GNUMake/sites/Make.unknown...
/home/zingale/development/amrex//Tools/GNUMake/Make.rules:476: tmp_build_dir/d/3d.gnu.MPI.EXE/f90.depends: No such file or directory
make: *** No rule to make target `meth_params.F90', needed by `tmp_build_dir/d/3d.gnu.MPI.EXE/f90.depends'. Stop.
You need to use GNU make version 3.82 or later.
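You can check which version you have with:

make --version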
Running
How do we turn off all the initial projections to look at the initial velocity field as specified in initdata, instead of as modified by the velocity constraint?
maestro.max_step = 1
maestro.init_iter = 0
maestro.init_divu_iter = 0
maestro.do_initial_projection = false
MAESTROeX crashes because the multigrid algorithm fails to converge—how do I get around this?
Setting general convergence criteria for multigrid is as much art as science. First, it is important to determine whether the multigrid solver is close to convergence and just dancing around near the desired tolerance, never quite reaching it, or whether it is nowhere near convergence. In the latter case, it may be that the multigrid solver was fed bad data and the problem arose in one of the earlier steps. To get more detailed information from the multigrid solver, set mg_verbose to a positive integer from 1-4 (the higher the number, the more information you receive).
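For example, to get moderately detailed solver output, you could add the following line to your inputs file (a sketch, assuming mg_verbose lives in the same maestro. namespace as the other parameters shown in this FAQ):

maestro.mg_verbose = 2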
If the multigrid solver is failing during one of the initial “divu” iterations, it may be because the velocity is initially zero, so there is no velocity magnitude to use as a reference for convergence, and \((S - \bar{S})\) is very small (or zero). In this case, it is usually a good idea to perturb the initial state slightly, so the right-hand side is non-zero.
The tolerances used for the various multigrid solves in the code can be overridden on a problem-by-problem basis by setting the relevant parameters in the problem’s inputs file (see the solver tolerances section in the § runtime parameters tables).
Why do the initial projection and “divu” iters sometimes have a harder time converging than the multigrid solves in the main algorithm?
The initial projection and “divu” solves set the density to \(1\) (see § Initialization), so the coefficients in the elliptic solve are \(O(\beta_0) \sim O(\rho)\). But in the main algorithm, the coefficients are \(O(\beta_0/\rho) \sim O(1)\). Since \(\rho\) can vary a lot, the variation in the coefficients in the initial projection and “divu” solves presents a harder linear system to solve.
How can I obtain profiling information for my run?
The code is already instrumented with timers. Simply compile with TINY_PROFILE=TRUE in the GNUmakefile, or equivalently do make TINY_PROFILE=TRUE. A summary of the timings will be output to stdout at the end of the run.

With the GNU compilers, you can enable profiling with gprof by compiling with USE_GPROF=TRUE in your GNUmakefile. When you run, a file named gmon.out will be produced. This can be processed with gprof by running:

gprof exec-name

where exec-name is the name of the executable. More detailed line-by-line information can be obtained by passing the -l argument to gprof.
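For example, a minimal sketch of the full workflow (the executable and report names here are illustrative):

./Maestro2d.gnu.ex inputs                               # running the instrumented code produces gmon.out
gprof ./Maestro2d.gnu.ex gmon.out > gprof-report.txt    # flat profile and call graph
gprof -l ./Maestro2d.gnu.ex gmon.out > gprof-lines.txt  # line-by-line profile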
How can I force MAESTROeX to output?
To generate a checkpoint file, in the output directory do:
touch dump_and_continue
For a plotfile:
touch plot_and_continue
or a small plotfile:
touch small_plot_and_continue
At the end of a timestep, the code will check whether these files exist; if so, it will do the corresponding output and then remove the file.
How can I check the compilation parameters of a MAESTROeX executable?
The build information (including git hashes, modules, EoS, network, etc.) can be displayed by running the executable as
./Maestro.exe --describe
Debugging
How can we dump out a variable to a plotfile from any point in the code?
#include <AMReX_VisMF.H>

VisMF::Write(uold[0], "a_uold");
VisMF::Write(umac[0][0], "a_umacx");
This plotfile can be visualized with Amrvis using the -mf flag.
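For example, to view the a_uold MultiFab written above with a 2-d build of Amrvis (assuming the executable is named amrvis2d, the usual Amrvis naming convention):

amrvis2d -mf a_uold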
How can I print out a MultiFab’s contents from within the code?
There is a print function in MaestroDebug.cpp. It can be called simply as

PrintMF(a);

where a is a (multi-level) MultiFab.

How can I debug a parallel (MPI) job with gdb?
If you only need to use a few processors, the following command will work:
mpiexec -n 4 xterm -e gdb ./Maestro2d.gnu.ex
where the executable needs to be built with the -g compiler flag. This will pop up multiple xterms with gdb running in each. In each xterm, you then need to issue:

run inputs

where inputs is the desired inputs file.
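Alternatively, you can pre-load the arguments into each gdb session with gdb’s --args option, so that you only need to type run at each prompt:

mpiexec -n 4 xterm -e gdb --args ./Maestro2d.gnu.ex inputs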
How can I get more information about floating point exceptions?
AMReX can intercept floating point exceptions and provide a helpful backtrace file that shows you where they were generated.
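One way to enable this is through AMReX’s standard runtime parameters, which can be set in the inputs file or appended to the command line. A sketch (the executable name is illustrative; the amrex.* parameters are standard AMReX options):

./Maestro2d.gnu.ex inputs amrex.fpe_trap_invalid=1 amrex.fpe_trap_zero=1 amrex.fpe_trap_overflow=1

When an exception is trapped, AMReX writes a backtrace file (e.g. Backtrace.0, one per MPI rank) showing where the exception was generated.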
How can I get information about potential bugs before running the code?
We run clang-tidy on all pull requests using a GitHub action.
clang-tidy
analyzes the source code, produces warnings for potential bugs and offers suggestions for performance improvements.clang-tidy
can also be run locally. This requires theclang-tidy
andbear
packages (installed using e.g.sudo apt install bear clang-tidy
on Ubuntu), and the python scriptrun-clang-tidy.py
(which can be downloaded from here). The analysis is performed by first compiling a problem using thebear
package, then running the python script to analyze the source files. From within a problem directory, runbear make -j 20 USE_OMP=FALSE USE_MPI=FALSE DEBUG=TRUE python3 run-clang-tidy.py -header-filter='MAESTROeX' -ignore-files='amrex|Microphysics' -j 20 > clang-tidy-report.txt
The compiler flags can be modified to suit the problem to be analyzed, but the DEBUG flag must be set to TRUE. The header-filter option for the python script tells the script to only analyze header files containing the given regex pattern, and the ignore-files flag tells it to ignore any source files containing the given regex pattern. The -j option tells the script to run a given number of processes in parallel. The output is then redirected to a text file.
I/O
How can I tell from a plotfile what runtime parameters were used for its run, or when it was created?
In each plotfile directory, there is a file called job_info (e.g. plt00000/job_info) that lists the build directory and date, as well as the value of every runtime parameter for the run.

How can I force the code to output a plotfile / checkpoint file at the next step?
In the output directory (where the code is running) do touch .dump_plotfile. This will create an empty file called .dump_plotfile. At the end of each step, if the code finds that file, it will output a plotfile. Simply delete the file to restore the code to its normal plotfile behavior.

Similarly, creating the file .dump_checkpoint will force the output of a checkpoint file.
Algorithm
Why is MAESTROeX so “hard” to use (e.g. as compared to a compressible code)?
There are several complexities to the algorithm that don’t have straightforward compressible counterparts. These mainly involve the role of the base state and the constraint equation.
Care must be taken to set up an initial model/initial base state that respects the thermodynamics in MAESTROeX and is in hydrostatic equilibrium. Best results are attained when the model is processed with the MAESTROeX EOS and reset into HSE, as is done in the initial_model routines. Because MAESTROeX builds off of the base state, any flaws in that initial state will influence the subsequent behavior of the algorithm.
The constraint equation brings another complexity not seen in compressible codes—information is instantly communicated across the grid. In compressible codes you can track down a problem by watching where it starts from and watching it move one cell per dt. In MAESTROeX things can go wrong in multiple places without it being obvious where the root problem is.
In the final projection in the algorithm, we project \(U^{n+1}\), using a time-centered \(\beta_0\) and a time-centered \(\rho_0\), but an “\(n+1\)”-centered \(S\). Why then is the resulting \(\phi\) (which then defines \(\pi\)) at “\(n+1/2\)”?
The short answer to this question is that you should think of this as really projecting \((U^{n+1} - U^n)\) and the right hand side as having \((S^{n+1} - S^n)\). This is because the pressure enters the dynamic equations as \((U^{n+1} - U^n) = \ldots + \frac{1}{\rho^{n+1/2}} \nabla \pi^{n+1/2}\). (We approximate \(\pi^{n+1/2}\) by \(\pi^{n-1/2}\) then do the projection to fix the \(\pi\) as well as the \(U\).)
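To sketch why, write the constraint (in the form \(\nabla \cdot (\beta_0 U) = \beta_0 (S - \bar{S})\), treating \(\beta_0\) as the same time-centered value at both levels) at times \(n\) and \(n+1\) and subtract:

\(\nabla \cdot \left[\beta_0 \left(U^{n+1} - U^n\right)\right] = \beta_0 \left[\left(S^{n+1} - \bar{S}^{n+1}\right) - \left(S^n - \bar{S}^n\right)\right]\)

The quantity being projected is thus a difference across the step, and the \(\phi\) (and hence \(\pi\)) associated with it naturally sits at the midpoint, \(n+1/2\).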
So everything is in fact time-centered.
Why is \(\bar{\Gamma}_1\) computed as the average of the full state \(\Gamma_1\) instead of computed from the base state density and pressure via the equation of state?
The primary reason is that there is no base state composition. The base state density is simply the average of the full state density, and the base state pressure is the pressure required for hydrostatic equilibrium. There is no thermodynamic relationship enforced between these base state quantities.
Can I run a full star in 2-d axisymmetric geometry?
No. This is a design decision. There is no support for axisymmetric coordinates in MAESTROeX. Spherical problems must be run in 3-d.
Why did we switch all the equations over to the \(\tilde{\mathbf{U}}\) form instead of just working with \(\mathbf{U}\)?
This is basically a numerical discretization issue. Whenever the base state aligns with the grid, you should be able to show that you get exactly the same answer each way.
When you do a spherical star on a 3-d Cartesian grid, though, \(w_0\) is defined on the radial mesh and \(\tilde{\mathbf{U}}\) on the Cartesian mesh, and the \(w_0\) part never experiences the Cartesian projection, for example. So there are differences in exactly how the \(w_0\) component appears (projected on the Cartesian mesh vs. interpolated from the radial mesh)—we made the decision at the time to separate the components for that reason.
Why does “checkerboarding” appear in the velocity field, especially in regions where the flow is stagnant?
Checkerboarding can arise from the projection—it doesn’t see that mode (because it is an approximate projection) so it is unable to remove it. This allows the pattern to slowly build up. There are filtering techniques that can be used to remove these modes, but they are not implemented in MAESTROeX.
Analysis
I want to open a plotfile, derive a new quantity from the data stored there, and write out a new plotfile with this derived data. How do I do this?
One implementation of this can be found in amrex/Tools/Postprocessing/C_Src/PtwisePltTransform.cpp. This reads in the plotfile data using the AMReX_DataServices class, performs a transformation on the data based on a set of components specified on the command line, and outputs the solution to a new plotfile.